Reducing the Attack Surface through Secure Configuration Management

By Patrick Goldsack

The role of Configured Things Ltd. within the Synergia project is to produce a multi-tenanted platform for configuring and managing IoT devices, the data that flows from devices, and analytics that need to be applied at both the edge and the backend. This includes the ability to securely share data and control, and to be able to delegate the management activities in a secure way.

The basic architecture is illustrated in the diagram:

The system consists of three roles: the platform owner, who can configure new endpoint owners to join the platform; the endpoint owners, who own the endpoints that connect to the edge and supply data to the data flows; and a security officer, who can force-detach any endpoint that has been identified as a threat.

This is best illustrated by the deployment of the Synergia platform into Future Space – a space for SMEs and start-ups with many leased offices and laboratories, as well as communal shared spaces such as meeting rooms and open areas. In this deployment Synergia is providing the distributed, low-energy IoT platform consisting of a back-end, connected to multiple edge devices over standard networking technologies, and those in turn securely connected to a myriad of extremely low-power battery-driven wireless IoT devices. These IoT devices, mostly sensors, are installed in a variety of physical locations.

The platform provides to each of the tenants of the building – the endpoint owners – the ability to add new IoT devices and collect and analyse data in a way that is private to them, or shared with their permission with the other tenants. Future Space are considered here to be the Platform Owner and probably the Security Officer.

The configuration of this space is dynamic and complex, and so we have been developing a platform to enable the secure configuration and management of the system, and the stream of changes to that configuration. It must handle multiple overlapping and conflicting changes that need to be automatically authorised, validated, processed and reconciled.

The approach taken has properties which make it particularly suitable for use in complex, distributed environments such as connected places and other IoT deployments that are the target domains for Synergia.

  • It uses a declarative approach to describing configuration requirements which can be submitted and retracted as required
  • It provides modelling that can be specialised and scoped to each tenant
  • It considers configuration state to be the list of currently authorised configuration requirements (effectively change requests) relative to the notional “safe baseline” state
  • It takes a zero-trust approach to the origin and transport of these change requests
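These properties can be sketched in a few lines of Python. This is a minimal illustration with invented field names, not the platform's actual schema: configuration state is just the safe baseline plus the set of currently authorised change requests, and retracting a request is simply a recomputation without it.

```python
# Illustrative sketch only: state = baseline + authorised change requests.
import copy

baseline = {"radio": {"channel": 11}, "sensors": {}}

change_requests = [
    {"id": "cr-1", "priority": 1, "path": ["sensors", "temp-01"],
     "value": {"interval_s": 60}},
    {"id": "cr-2", "priority": 2, "path": ["radio"],
     "value": {"channel": 26}},
]

def desired_state(baseline, requests):
    """Recompute the desired state from scratch: the safe baseline,
    then every authorised request, lowest priority first."""
    state = copy.deepcopy(baseline)
    for cr in sorted(requests, key=lambda c: c["priority"]):
        node = state
        for key in cr["path"][:-1]:
            node = node.setdefault(key, {})
        node[cr["path"][-1]] = cr["value"]
    return state

state = desired_state(baseline, change_requests)

# Retracting a request is simply recomputing without it -- no "undo".
state_after_retract = desired_state(
    baseline, [c for c in change_requests if c["id"] != "cr-2"])
```

Because the state is always rebuilt from the current set of requests, no history needs to be unwound when a request is withdrawn.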

In the architecture diagram this approach is visible in that every interaction, in both directions (configuration and status), is through the exchange of signed, encrypted (where required, for example because the configuration contains a secret), and otherwise validated declarative configuration requirements.

In a connected place, configuration change is not simply an infrequent system administration task; it is in effect the currency and interface between systems that need to cooperate. This observation is the driving force behind treating configuration change as the primary requirement. However, whilst the ability of a system to change or be changed by multiple tenants is a key part of its value, it also widens the attack surface and so presents a major security problem. Misconfiguration, whether unintentional, malicious, or simply because a previously valid change has been invalidated by a change in policy, is the root cause of many security breaches.

The traditional mitigation to this has been to enforce strong change control processes, rigorous testing, and limiting the pool of trusted actors that can initiate change. However, in a complex multi-party ecosystem such as a Connected Place, where systems are inherently decentralised, such approaches are limited in their effectiveness. Such processes are not designed for cross-tenant change management.

The emergence of DevOps has brought with it an increase in the adoption of declarative approaches for service deployment and configuration, which abstract the complexity of how to implement a change away from the specification of the required state. In much the same way that a Satnav only needs to be given the destination and can work out for itself how to get there regardless of the current location, declarative systems accept a definition of the required state (the destination) and work out the set of changes needed to bring the system into alignment (the route).
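The satnav analogy can be made concrete with a toy sketch (the configuration keys here are invented for the example): the caller states only the destination, and the system derives the route from wherever it currently is.

```python
# Toy illustration of the declarative idea: given only the desired
# state, derive the actions needed from whatever the current state is.
def plan(current, desired):
    actions = []
    for key, value in desired.items():
        if key not in current:
            actions.append(("create", key, value))
        elif current[key] != value:
            actions.append(("update", key, value))
    for key in current:
        if key not in desired:
            actions.append(("delete", key))
    return sorted(actions)

desired = {"mqtt_port": 8883, "tls": True}

# Two different starting points, one destination.
plan_a = plan({"mqtt_port": 1883}, desired)
plan_b = plan({"mqtt_port": 8883, "debug": True}, desired)
```

The caller never specifies the route; the two plans differ only because the starting states differ.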

When implemented correctly, declarative systems are robust (as they hide the complexities of state management), are simpler to interact with than a transactional API (because they allow the user to focus on the required outcome) and provide direct traceability to who requested / authorised the current configuration of the system.

Such declarative systems have now become predominant for infrastructure provisioning (AWS CloudFormation, Azure Resource Manager, Terraform, etc.) and for containerised applications (Kubernetes/Helm, etc.).

However, the current generation of systems all share a number of limitations in this context:

  • They are generally focused on, and limited to, a linear sequence of states where each change is presented to the system as a complete new version of all or some part of the system. In the code analogy underpinning DevOps this is like moving from release to release along the main branch. This creates problems when, for example, it becomes necessary to isolate and remove a single faulty or malicious change. Our approach is more akin to simultaneously managing and releasing from multiple branches.
  • They are ‘release’, rather than ‘change’, focused, and work with relatively static configurations. Where there is dynamic behaviour (e.g., some form of auto scaling) they describe the configuration of that rather than act as the controller.
  • They assume non-overlapping trust domains; while it is best practice to divide the system into stacks for different layers or service areas, each of these then effectively has an owner or owners with full control of that part of the system.
  • There is limited consideration of domains driving sometimes conflicting changes, requiring reconciliation based on policy.
  • There is limited granularity for defining the scope of what can be changed within an area; modules will typically have a fixed set of parameters that can be supplied, values that can be inspected, and may provide default values and some type checking. Within a trust domain this may provide a reasonable level of protection and flexibility, but we need to be able to add finer-grained constraints without having to define a complex authorisation model.

The platform allows us to build declarative systems that overcome these limitations, based on the following principles:

  • The inputs to the platform define the desired outcome of a change request. This can of course be a full description of some part of the system, but it can also be of some subset.
  • Change requests come from several different sources, be it people or other systems, and must be authenticated appropriately.
  • The language used to express a change request should as far as possible limit what can be requested; it is much harder to make an ‘insecure’ change if you cannot even describe it. This is an essential aspect of reducing the attack surface, and is the analogue of Newspeak in the book 1984, where Orwell writes:

“Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it.”

  • Change requests may overlap in scope and may be in conflict. The system must resolve these automatically in a deterministic way.
  • Change requests are idempotent – that means that resubmission of a change has no impact beyond that of the original change. This is essential to building robust disaggregated systems, making system recovery considerably easier.
  • It needs to be as easy to remove a change request as it was to make it in the first place, whether that’s in response to a change in intent or a change in permission.
  • Permission to make a change may require multiple parties to authorise it. Maybe someone from each of two cooperating tenants, or perhaps dual sign-off from one organisation to help mitigate against an insider attack. These policies around permission need to be rich enough to represent required authorisation structures, but equally simple enough to be able to reason about the security and to operate them in practice.
  • Changes that span multiple tenants must be independently agreed by all the parties, and any party to a change may subsequently withdraw permission, at which point the system should reconfigure itself accordingly. An example might be data sharing between two tenants of a connected place: there must be a matching offer of data and request for data for the sharing to be enacted. Either party could withdraw from the agreement, and at that point the underlying system should immediately remove the capability.
  • Zero trust should apply to changes in the same way that it applies to networks; the trust should be associated with the change itself, and not the transport or origin.

If we look briefly at how the platform meets these principles:

Our inputs are the desired outcome of a change request.
This might seem like a small point, but it underlies much of how the platform works, and is a different paradigm from declarative systems which take new complete models or modules. For a start it means the way in which we derive the new required state is itself stateless. Change requests from different sources can arrive or be removed in any order and at any time, and we re-evaluate the new desired state.
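A minimal sketch of this stateless re-evaluation (using an invented flat settings model, one simple interpretation of the platform's behaviour): the same set of change requests always yields the same desired state, whatever order they arrived in, and resubmission changes nothing.

```python
import itertools

def desired_state(requests):
    """Derive the desired state purely from the current set of
    requests -- no history, no dependence on arrival order."""
    state = {}
    # Deterministic application order: priority, then id as tie-break.
    for cr in sorted(requests, key=lambda c: (c["priority"], c["id"])):
        state.update(cr["settings"])
    return state

requests = [
    {"id": "a", "priority": 1, "settings": {"channel": 11}},
    {"id": "b", "priority": 2, "settings": {"channel": 26, "power": "low"}},
    {"id": "c", "priority": 1, "settings": {"interval_s": 60}},
]

# Whatever order the requests arrived in, the outcome is identical...
outcomes = {tuple(sorted(desired_state(list(p)).items()))
            for p in itertools.permutations(requests)}

# ...and resubmitting an existing request has no effect (idempotence).
resubmitted = desired_state(requests + [requests[0]])
```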

It needs to be as easy to remove a change request as it was to make it in the first place.
This is an important property in a dynamic system with multiple sources of change. As described above, the only state in our system is the current set of change requests. This matters particularly when a change request is removed – the result is always a newly calculated state (derived from all the remaining change requests), which avoids any need for complex “undo” handling.

Change requests may overlap in scope and may be in conflict. The system must resolve these automatically in a deterministic and idempotent way.
Like other declarative systems our input is serialised data, in our case an extended form of JSON. Each interface to our platform accepts change requests and a priority, the base level for which may be defined by the specific interface. Whilst this is an extremely simple policy model for resolving conflicts, it appears to be adequate for current needs and more sophisticated approaches are being considered.
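The priority model can be sketched as a recursive merge (the request contents below are invented; our actual input is an extended form of JSON). Requests are processed highest priority first, and a value already set by a higher priority request always stands.

```python
# Sketch of deterministic conflict resolution via per-request priority.
def merge(high, low):
    """Recursive merge in which values already set by higher priority
    requests always stand."""
    out = dict(high)
    for key, value in low.items():
        if key not in out:
            out[key] = value
        elif isinstance(out[key], dict) and isinstance(value, dict):
            out[key] = merge(out[key], value)
        # else: conflict -- the higher priority value is kept
    return out

def resolve(requests):
    state = {}
    for cr in sorted(requests, key=lambda c: -c["priority"]):
        state = merge(state, cr["settings"])
    return state

facilities = {"priority": 9, "settings": {"hvac": {"setpoint_c": 22}}}
tenant_a = {"priority": 2, "settings": {"hvac": {"setpoint_c": 21},
                                        "lights": {"lux": 400}}}
tenant_b = {"priority": 1, "settings": {"hvac": {"setpoint_c": 24}}}

with_policy = resolve([tenant_a, tenant_b, facilities])
without_policy = resolve([tenant_a, tenant_b])
```

Non-conflicting parts of lower priority requests (the lighting settings here) still take effect; only the genuinely contested values are overridden.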

Change requests come from several different sources, which might be people or other systems.
Rather than have a single interface used by all sources, with a common authentication and role-based access control model, we provide a separate interface for each source. Our API operations are simply “Make this change request” and “Remove this change request”. Each interface has its own constraints on ‘if’ and ‘how’ change requests are passed into the system. Interfaces are added and removed from the system dynamically according to need, which helps keep the attack surface small.
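A hypothetical sketch of this two-operation, interface-per-source model (all names and the acceptance rule are invented for illustration):

```python
# One interface per source, each with its own acceptance rule.
class Interface:
    def __init__(self, name, accepts):
        self.name = name
        self.accepts = accepts          # 'if/how' rule for this source
        self.requests = {}              # the only state: current requests

    def make_change_request(self, request):
        if not self.accepts(request):
            raise PermissionError(f"{self.name}: request not accepted")
        self.requests[request["id"]] = request

    def remove_change_request(self, request_id):
        # Removal is as easy as making the request in the first place.
        self.requests.pop(request_id, None)

sensors = Interface("sensors", lambda r: r["path"][0] == "sensors")
sensors.make_change_request(
    {"id": "cr-1", "path": ["sensors", "temp-01"],
     "value": {"interval_s": 60}})
accepted = "cr-1" in sensors.requests

# A request outside this interface's remit is rejected outright.
rejected = False
try:
    sensors.make_change_request(
        {"id": "cr-2", "path": ["network"], "value": {}})
except PermissionError:
    rejected = True

sensors.remove_change_request("cr-1")
```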

The language used to express a change request should as far as possible limit what can be requested
It is also possible to apply a range of rich schema constraints on each interface, to limit the scope of change requests it can process. Further, each interface is configured to limit the scope of the change requests it accepts by specifying the root object against which they will be applied. In this way it is not possible for an interface accepting change requests for sensors to accidentally expose the capability to modify network settings.
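As a sketch of this scoping (the root path and the allowed-key “schema” below are simplified inventions), each interface re-bases accepted requests under its configured root object and rejects anything outside its schema:

```python
# Per-interface scoping: requests are re-based under a root object and
# checked against a (much simplified) schema of allowed keys.
SENSOR_KEYS = {"interval_s", "enabled"}

def accept(root_path, allowed_keys, request):
    """Re-base the request under the interface's root object and check
    it against the interface's schema; reject anything else."""
    if not set(request["value"]) <= allowed_keys:
        return None                     # outside the schema -> rejected
    return {**request, "path": root_path + request["path"]}

ok = accept(["sensors"], SENSOR_KEYS,
            {"id": "cr-7", "path": ["temp-01"],
             "value": {"interval_s": 30}})

# However the request is phrased, it cannot escape the "sensors" root,
# and settings outside the schema are simply not expressible here.
bad = accept(["sensors"], SENSOR_KEYS,
             {"id": "cr-8", "path": ["temp-01"],
              "value": {"dns": "10.0.0.1"}})
```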

The remaining principles relate to our security model:

Authentication and non-repudiation:  Each change request can include one or more cryptographic signatures, which verify both the author(s) of the change request and the integrity of its content. For a change to be accepted the set of signatures must match the rules for that interface, which can for example be “Any of Alice, Bob or Charlie”, “At least two of …”, “Any signature from this Organisation”, etc.
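Such rules compose naturally as predicates over the set of verified signers. In this sketch the signer sets stand in for signatures that have already been cryptographically verified (the names are invented):

```python
# Composable signature rules over the set of verified signers.
def any_of(*names):
    return lambda signers: bool(set(names) & set(signers))

def at_least(n, *names):
    return lambda signers: len(set(names) & set(signers)) >= n

# "At least two of Alice, Bob or Charlie" -- e.g. dual sign-off to
# help mitigate an insider attack.
dual = at_least(2, "alice", "bob", "charlie")
single = any_of("alice", "bob", "charlie")

dual_ok = dual({"alice", "charlie"})
dual_fail = dual({"alice"})
single_ok = single({"alice"})
```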

Note that the authentication is against each change request. It does not otherwise depend on the origin, session, or transport used to bring the request to the system, which meets our Zero Trust principle.

Authentication is performed by each interface against its own specific policy. Change requests which do not meet the policy are rejected. A change in policy always results in the re-authentication of all the change requests submitted via that interface (remember change requests are our only state), so any change in policy always takes immediate effect, and results in a new desired state that conforms to the policy (i.e., is only derived from authenticated change requests). This works well alongside certificate revocation, for example, and provides all the mechanisms for seamless frequent roll-over of keys. This can allow for the use of short-lived certificates.
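Because the retained change requests are the system's only state, re-authentication after a policy change is just a re-run of the policy over all of them, as this sketch shows (request contents and signer names are invented):

```python
# Policy change = re-authenticate every retained change request.
def authorised(requests, policy):
    return [r["id"] for r in requests if policy(r["signers"])]

requests = [
    {"id": "cr-1", "signers": {"alice"}},
    {"id": "cr-2", "signers": {"mallory"}},
]

any_signature = lambda signers: len(signers) > 0
mallory_revoked = lambda signers: "mallory" not in signers  # key revoked

before = authorised(requests, any_signature)
# Revoking Mallory's key takes immediate effect: her requests drop out
# of the set from which the desired state is derived.
after = authorised(requests, mallory_revoked)
```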

Authorisation:  Processing a change request is a merge of two data structures, the change request itself and the result of merging any higher priority change requests. Such an operation can create new values or update existing values.

In a comparable system with a REST API there would be operations for each object type with a corresponding RBAC rule to be configured to describe the permitted operations, the scope of which is a predetermined trade-off between granularity of control and complexity of rules.

In our platform this is replaced by constraints which can be placed at any point in the data structure to define under which conditions that part of the structure can be updated, extended, read, or referenced during the merge. The authorisation for these constraints is based on the signatures in the change request. Note that because it is embedded in the data structure the authorisation policy is itself part of the desired state of the system. So, for example a higher priority (processed first) change request can add or modify authorisation policy to some part of the system that is then enforced against lower priority change requests. As any change in the set of change requests results in a re-calculation of the desired state, any change in the authorisation policy is always applied immediately and the effect of any now-unauthorised changes are nullified.
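A sketch of this idea of authorisation embedded in the data structure (all names and the flat settings model are invented, and real constraints can attach at any depth): a higher priority request installs a constraint, and the merge then enforces it against lower priority requests.

```python
# Constraints travel with the data: a higher priority request installs
# them, and the merge enforces them against lower priority requests.
def resolve(requests):
    state, constraints = {}, {}
    # Highest priority first, so its constraints govern what follows.
    for cr in sorted(requests, key=lambda c: -c["priority"]):
        constraints.update(cr.get("constraints", {}))
        for key, value in cr.get("settings", {}).items():
            rule = constraints.get(key)
            if rule is None or rule(cr["signers"]):
                state.setdefault(key, value)
            # else: the unauthorised write is simply nullified
    return state

admin_policy = {"priority": 9, "signers": {"admin"},
                "constraints": {"radio_channel": lambda s: "admin" in s}}
tenant = {"priority": 1, "signers": {"tenant-a"},
          "settings": {"radio_channel": 26, "interval_s": 60}}

state = resolve([tenant, admin_policy])
```

The tenant's unconstrained setting takes effect, while its attempt to change the radio channel is nullified by the policy installed at higher priority.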

The above probably paints a picture of some form of centralised policy / resolution engine, but what we implement is a mesh of such systems which exchange models with each other. And how do we describe and control what the system looks like? That too is just another form of configuration, so we use the same language and tooling to deploy and manage our system. In QA speak, we are customer zero of our product.

In Synergia we recently demonstrated this approach through three types of role, described in the diagram, each of which acts independently, yet collectively they control the overall state of the system: a Platform Operator, who configures the IoT radio network and adds other roles to the system; one or more Endpoint Owners, who configure devices and their associated data flows; and a Security Officer, who can selectively disable devices that are perceived to have become a threat to the system. Each of the roles is an organisation, which may have members with specific tasks and permissions. We enable each organisation to specify “who may change what” from that, and indeed other, organisations.

You can see the video here on our YouTube channel:  https://youtu.be/yHa0g9LQsrQ

Posted in News

SYNERGIA Interim Demonstrator Completed

By Aftab Khan

The SYNERGIA project successfully completed the Interim Demonstrator to its Advisory Group on 25 January 2022.  The details and videos of the presentation are available here.


Detecting Network Intrusions at the Edge

By Dan Howarth

One of the core work packages in the Synergia project is Distributed Intrusion Detection System for the Protection of the Edge, which is focussed on how we can detect network intrusions on resource-constrained edge devices in an IoT network.

The work was led by Smartia, one of the UK’s leading industrial AI & IoT technology companies. IoT security is a particular focus of Smartia’s research as the adoption of its industrial intelligence platform, MAIO, accelerates.

This blog looks at the machine learning approach we used to detect these network intrusions, and why we chose this approach.

Machine Learning Approach

A machine learning model is typically trained on data that is representative of the data it can expect to see when tasked with making predictions. In our case, this is data on the operating system’s activities before, during and after a ‘container escape’ – an event in which the software used to deploy applications – the container – hosts a malicious program that breaks out and attacks the edge device.

Our chosen modelling approach would need to meet the following requirements:

  • It should be appropriate to the data we were collecting – in particular, the dataset collected by the University of Bristol had a relatively small amount of data for container-escape events compared to data for normal, non-attack conditions;
  • It needed to be deployable on an edge device – this means it needed to be fairly small in terms of memory, so that it can fit on an edge device and not consume too much power in its execution;
  • Finally, it needed to be accurate – we wanted an approach that was powerful and therefore more likely to succeed, as well as something that offered us flexibility and scope to fine-tune and squeeze as much performance as possible from the model.

Autoencoder

The approach we felt best met these requirements was an autoencoder.

Figure 1: Autoencoder schema
https://commons.wikimedia.org/wiki/File:Autoencoder_schema.png

An autoencoder is used to convert a set of data into a smaller representation of itself. It does this by removing noise from the data and focusing on the core dimensions of the data. This encoding can be useful in its own right as a way of compressing large dimensional data into something more manageable for further analysis or modelling activity.

However, we additionally decode this representation using the autoencoder to try and recreate the original data passed to the model. The difference between the original and reconstructed data is captured by a reconstruction error. The lower the reconstruction error, the better able the model is to reconstruct the data.

Figure 1 sets out the high level architecture for an autoencoder.

Semi-supervised Learning

Autoencoders have a variety of uses; in our case, the autoencoder enabled us to tackle the dataset requirement by adopting a semi-supervised approach, which is designed to deal with situations where there is only a small amount of labelled data.

Our autoencoder is trained on normal (non-event data) only. It is trained until its reconstruction error is very low, so that we are confident that it can reconstruct the encoded normal data passed to it at the decoding stage. When ‘container escape’ data is passed to it, it should return a high reconstruction error – that is, it is unable to effectively reconstruct this data because it is sufficiently different from the normal data.
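To make the idea concrete, here is an illustrative linear analogue of the approach – PCA used as a linear autoencoder, implemented with NumPy on synthetic data. This is not the project's neural model or dataset: it simply shows that a model fitted to normal data only reconstructs normal samples well and attack-like samples badly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "normal" system activity: three features that
# lie close to a one-dimensional subspace, plus a little noise.
t = rng.normal(size=(500, 1))
normal = t @ np.array([[1.0, 2.0, -1.0]]) + 0.05 * rng.normal(size=(500, 3))

# Fit the linear "encoder/decoder" (PCA) on normal data only.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:1]                       # 1-D bottleneck

def reconstruction_error(x):
    code = (x - mean) @ components.T      # encode
    recon = code @ components + mean      # decode
    return np.linalg.norm(x - recon, axis=1)

# Threshold chosen from normal data alone (the semi-supervised step).
threshold = reconstruction_error(normal).max() * 1.5

# A sample well off the normal subspace reconstructs poorly.
attack = np.array([[5.0, -5.0, 5.0]])
attack_error = reconstruction_error(attack)[0]
```

A neural autoencoder replaces the single linear projection with trained non-linear encoder and decoder networks, but the detection logic – reconstruction error against a threshold – is the same.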

Flexibility

An autoencoder is a very flexible approach. We are able to implement a wide range of architectures for the encoder and decoder to find the best performance (for example, varying the size and number of layers within the model). And, because it is part of the neural network family of models, it is well supported in machine learning software libraries, making design and implementation straightforward.

Additionally, because of this flexibility and by keeping the architecture small, we are able to design a model that can meet edge deployment constraints.

Results

Figure 2: Confusion matrix

The result of this approach was a model that was very accurate. Following model training, we tested it on unseen data and applied a threshold to the reconstruction error, so that any score above the threshold was classified as an anomaly (attack).

The confusion matrix in Figure 2 shows how well the model performed on the test set. As we can see, it predicted all anomalies correctly and almost all of the normal data too.

This meant an accuracy of over 99%, which is the final piece of evidence that the approach we took was able to meet our requirements.
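The final classification step can be sketched as follows. The reconstruction-error values below are illustrative, not project data; they just show how errors above a threshold are flagged as anomalies and tallied into a confusion matrix.

```python
# Thresholding reconstruction errors and tallying a confusion matrix.
normal_errors = [0.02, 0.05, 0.04, 0.03, 0.06]   # label 0: normal
attack_errors = [0.91, 0.75, 0.88]               # label 1: anomaly
threshold = 0.10

samples = [(e, 0) for e in normal_errors] + [(e, 1) for e in attack_errors]

tp = tn = fp = fn = 0
for error, label in samples:
    predicted = 1 if error > threshold else 0    # 1 = anomaly (attack)
    if predicted == 1 and label == 1:
        tp += 1
    elif predicted == 0 and label == 0:
        tn += 1
    elif predicted == 1 and label == 0:
        fp += 1
    else:
        fn += 1

accuracy = (tp + tn) / len(samples)
```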


SYNERGIA Smart Building Use Case and Prototype

By Joshua Acanthe

SYNERGIA is creating a secure platform for the next generation of resource-constrained IoT devices and in order to demonstrate that, we have been developing one example of such a device.


Future Space is a dedicated facility for over 40 start-up businesses requiring work and lab space in Bristol. This presents an interesting challenge of how to manage IoT data within a multi-tenancy system. We need to make sure only those with authorisation can access the data. What is this data? How can it be used? In this post, we will find out.

Three important pieces of data for a smart building would be:

  • Energy usage
  • Space usage
  • Environmental Monitoring

Energy Usage monitors the Lights, Heating and Ventilation to provide insight into the building’s carbon footprint. Data on energy usage can drive down operational costs for the building but can also help combat global warming through reduced carbon emissions.

Space Usage takes data from motion-sensing technology or other means of detecting presence, and uses that data to find out how the building space is used. For example, it can flag wasted and unused space, as well as space that is always in use throughout the day; this insight could lead to underused space being repurposed to become more useful.

Environmental Monitoring collects data such as temperature, humidity and sometimes gas to make sure that these environmental factors are suitable for work. They can also be used to control heating, air conditioning and ventilation.

A prototype has been developed to collect and send this data to the backend, where the data analytics can take place. Devices like these will be connected to the SYNERGIA network to provide data and to help demonstrate the innovative security features that SYNERGIA provides.


SYNERGIA Publicised and Published

By Dr. James Pope

Over the past couple of months the SYNERGIA Project has been busy collaborating and participating in numerous events and developing quality research publications.  The project engaged numerous academic and industry organisations during the following events.

  1. Smart Internet Lab Conference
  2. Bristol CSN Lab – BT Visit
  3. Toshiba UMBRELLA Launch Event
  4. FutureSpace Founder’s Meeting Presentation

http://www.bristol.ac.uk/engineering/research/smart/smart2021-future-networks-research-conference/

On 23 September, the project presented at the University of Bristol’s SMART: 2021 Future Networks Research Conference.  The conference is a chance for academic and industrial experts to discuss future ambitions and challenges in telecommunications research.  The SYNERGIA virtual presentation had over 50 attendees from industry and academia.

On 6 October, The University of Bristol’s Communication Systems & Networks Group (CSN) hosted a visit with senior members from the BT communications conglomerate.  The visit was conducted in the Merchant Venturers Building, CSN Lab.  The SYNERGIA Project was presented along with several other applied and theoretical communications research projects.  There was particular interest in how the SYNERGIA AI/ML solution generalised beyond the IIoT.


https://www.eventbrite.co.uk/e/umbrella-launch-event-tickets-176415693087

On 18 October, Toshiba held the UMBRELLA Launch Event on the University of the West of England’s Frenchay Campus.  There were over 70 attendees from industry, academia, and local government.  The SYNERGIA Project engaged with attendees in a booth / short discussion format.  Numerous attendees approached SYNERGIA project members to discuss our AI/ML, system, security, and multi-tenancy research.

On 21 October, the SYNERGIA Project met with Future Space companies to discuss possible use cases and collaboration.  There were approximately twelve attendees, half of them at the executive level.  The meeting was facilitated and supported by Oxford Innovation.  Two Future Space companies have subsequently been in contact regarding potential future collaboration.


https://www.futurespacebristol.co.uk/

Finally, the SYNERGIA Project received notification that one of its publications had been accepted as part of an upcoming Association for Computing Machinery (ACM) conference on 17 November.  The publication includes 14 project members across 3 consortium organisations and is available via the ACM Digital Library.


How the SYNERGIA project supports COP26 objectives

By Mark Davies: With the UN Climate Change Conference taking place in Glasgow in November 2021, we take a look at how the SYNERGIA project will support the conference’s goals.

COP26 will focus on four major objectives:

  1. Secure global net zero by mid-century and keep 1.5 degrees within reach
  2. Adapt to protect communities and natural habitats.
  3. Mobilise finance
  4. Work together to deliver

Cities, whilst covering only 3% of the earth’s surface, consume 78% of the world’s energy and account for over 60% of global emissions. Connected places not only enhance the quality of living for their citizens by using data to improve operations including transportation, public services, utilities and infrastructure, but will also support the environmental changes needed to achieve the aggressive net-zero targets being set over the next decade.

Connecting and integrating services and systems within large-scale, multi-tenanted environments brings huge challenges as organisational boundaries are crossed, not least in the security aspects. The Centre for the Protection of National Infrastructure (CPNI), along with the BSI, commissioned the PAS 185 framework to specify a “security-minded approach” to the implementation and establishment of smart cities.

The SYNERGIA project has the potential to help address some of these challenges to support the implementation of a secure smart city environment through its work on the development and implementation of a secure platform, making it easier for central and local governments to achieve their goals.

SYNERGIA, a consortium led by Toshiba that includes the University of Bristol, Ioetec, Smartia, MAC Ltd and Configured Things, is developing and will demonstrate a novel secure-by-design, endpoint-to-core IoT platform for large-scale networks of low-power resource-constrained devices. This novel solution will help to keep IoT devices, and the data they create, secure and monitored whilst connected to the network.

Smart cities can help to

  • Reduce the levels of carbon emissions. Currently, the transport sector makes up 14% of global greenhouse emissions, and cities are set to hold over two thirds of the world’s population by 2050. Transportation in urban areas will therefore increase which, if not addressed, will result in a continued rise in emissions. Using IoT data to improve traffic management, shared transport and parking can support the more efficient movement of a city’s residents and visitors.
  • Protect communities & natural habitats with improved environmental monitoring, optimised services like waste collection and by using smart building solutions
  • Improve the efficient use of energy and water resources contributing to a more sustainable society.
  • Establish safe and secure cross-operational data-sharing to bring disparate stakeholders together to deliver better outcomes for its inhabitants.

To perform these tasks effectively, smart cities will have millions of connected IoT devices but crucially they cannot be implemented fully without effective and robust cybersecurity.

As part of UKRI’s Strategic Priorities Fund and the Security of Digital Technology at the Periphery (SDTaP) programme, SYNERGIA is addressing the challenge of a near-to-market secure and energy-efficient IoT system in resource-constrained environments such as smart cities.  Incorporating, utilising and combining technologies in a novel way, the SYNERGIA platform supports the NCSC’s new “Connected Places Cyber Security Principles”, enabling key stakeholders to enhance the quality of living for citizens, improve cooperation between siloed sectors and achieve the targets promised at this year’s conference.


SYNERGIA meets SDTaP’s IAC

By Theo Spyridopoulos: On the 15th of July, the Industrial Advisory Committee of the Security of Digital Technology at the Periphery (SDTaP) programme met to hear updates from the three Demonstrator projects in Round 1:

  • i-TRACE: IoT Transport Assured for Critical Environments, a collaboration between the University of Warwick, Cisco, BT, Senseon, and Costain working with Artificial Intelligence and Distributed Ledger technologies.
  • Secure-CAVs: The world’s first on-chip and in-life monitoring solution to rapidly detect cyber security threats in Connected and Autonomous Vehicles (CAVs), a collaboration between Coventry, Southampton, Siemens, and Copper Horse.
  • ManySecured: Collaborative development of Secure IoT Gateways & Routers, a collaboration between Cisco, NquiringMinds, the University of Oxford, and our friends at the IoT Security Foundation.

and from Round 2:

  • SYNERGIA: Secure bY desigN End to end platfoRm for larGe scale resource constrained Iot Applications, a collaboration between Toshiba’s Bristol R&D Lab, Configured Things, Ioetec, MAC Ltd, and Smartia.

In addition, we heard from two projects, led by PETRAS researchers, funded under SDTaP’s commercialisation stream through CyberASAP (Cyber Security Academic Startup Accelerator Programme), the only accelerator programme in the cybersecurity ecosystem for pre-seed funding:

  1. TAIMAS: Timing Anomalies as an Indicator of Mal-Intervention in Automation Systems (UCL and CUBE 2 Ltd in Worthing)
  2. THuVA: Improving Security with Techno-Human Vulnerability Analysis (UCL)

SYNERGIA falls under Theme 2 “Secure and energy-efficient IoT systems in resource-constrained environments” of InnovateUK’s “Demonstrators addressing cyber security challenges in the Internet of Things” round 2 call and focuses on end-to-end cyber security for IoT systems with resource-constrained devices. It involves AI as part of the security detection and mitigation mechanism at the Edge and plans to demonstrate the results in a real environment based on an existing Edge IoT platform. Similar challenges and areas of interest, especially in the field of AI at the Edge Gateway and Secure Configuration Management of thousands of IoT devices at the Edge, were also identified during the meeting. Project TAIMAS, in particular, uses autoencoders for anomaly detection to perform intrusion detection in Building Automation Systems in a similar way to us. In SYNERGIA we push the detection to the Edge, providing a human-in-the-loop under a Federated Learning architecture to improve the model’s performance in case of low-confidence outputs.

SYNERGIA focuses on a secure-by-design end-to-end platform for large scale resource-constrained IoT applications. We follow a three-tier architecture that includes i) the resource-constrained Endpoint Tier, where battery-powered sensor devices are scattered in the field, ii) the Edge Tier, which is geographically close to the Endpoints and is responsible for collecting the sensor data and providing processing capabilities used for data analytics and system configuration management at the Edge, and iii) the Back End Tier, which is responsible for aggregating the processed data from the Edge Tier and providing a User Interface to end users.

To inform the design of our security solutions, we conducted a threat analysis for the whole end-to-end system based on NIST’s threat modelling process in Special Publication 800-30. The main threats we are interested in revolve around unauthorised or malevolent users, services and devices trying to access or disrupt our system, targeting the Endpoint and Edge Tiers. To address these threats, we are developing a series of security solutions operating at the two Tiers.

Similarly to the TAIMAS project, SYNERGIA uses an autoencoder running at the Edge to model the Edge device’s normal behaviour and detect abnormal behaviours. To improve the model’s performance, we use a human-in-the-loop approach under a Federated Learning architecture, providing a user interface for security experts to extract system data corresponding to low confidence model inferences for external analysis and data labelling. We also employ AI deployed at the Edge to detect malicious drifts in the data collected from the Endpoint devices.
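To make the detection approach concrete, the following is a minimal sketch of autoencoder-style anomaly detection with a low-confidence band for human review. It uses a linear autoencoder (equivalent to truncated PCA) over synthetic sensor data; the model, thresholds, and band are illustrative assumptions, not SYNERGIA’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" telemetry: 2 latent factors observed through 6 sensors, plus noise.
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 6))
normal = latent @ mixing + 0.05 * rng.normal(size=(500, 6))

# Fit a linear autoencoder (equivalent to PCA): keep the top-2 components.
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:2]  # shared encoder/decoder weights

def reconstruction_error(x):
    z = (x - mean) @ components.T      # encode
    x_hat = z @ components + mean      # decode
    return np.linalg.norm(x - x_hat, axis=-1)

# Threshold learned from normal data; a band around it marks "low confidence"
# samples that would be escalated to a human analyst for labelling.
errs = reconstruction_error(normal)
threshold = np.percentile(errs, 99)

def classify(x):
    e = reconstruction_error(np.atleast_2d(x))[0]
    if e > 1.5 * threshold:
        return "anomaly"
    if e > 0.75 * threshold:
        return "low-confidence"        # human-in-the-loop review
    return "normal"
```

In a federated setting, each Edge node would train such a model locally and share only model updates; the low-confidence band is what drives data back to the security expert for labelling.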

A point raised during the meeting was the challenge of configuring and managing thousands of Endpoint devices scattered in the field; Intel has faced this issue with IoT deployments in the US. The existence of multiple actors and devices, each with different roles and owners, requires dynamic configuration management and control of the IoT. Providing this closer to the Endpoint Tier improves scalability as well as security and user privacy. In SYNERGIA, we address this challenge by delivering secure configuration and management of Endpoints, as well as secure Endpoint data processing through signed data flows deployed at the Edge.
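As a rough illustration of how a signed configuration change might be verified before being applied at the Edge, the sketch below uses a stdlib HMAC as a stand-in for the per-owner public-key signatures a real deployment would use; all names and keys here are hypothetical.

```python
import copy
import hashlib
import hmac
import json

# Hypothetical per-owner keys; a real platform would use asymmetric
# signatures so the Edge never holds an owner's signing secret.
OWNER_KEYS = {"tenant-a": b"tenant-a-secret"}

def _canonical(config):
    # Canonical serialisation so signer and verifier hash identical bytes.
    return json.dumps(config, sort_keys=True, separators=(",", ":")).encode()

def sign_config(owner, config, key):
    tag = hmac.new(key, owner.encode() + b"|" + _canonical(config),
                   hashlib.sha256).hexdigest()
    return {"owner": owner, "config": config, "sig": tag}

def verify_and_apply(msg, applied):
    key = OWNER_KEYS.get(msg["owner"])
    if key is None:
        return False                   # unknown owner: reject
    expected = hmac.new(key, msg["owner"].encode() + b"|" + _canonical(msg["config"]),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, msg["sig"]):
        return False                   # unsigned or tampered configuration
    applied[msg["owner"]] = copy.deepcopy(msg["config"])
    return True

state = {}
msg = sign_config("tenant-a", {"sensor-42": {"interval_s": 60}}, b"tenant-a-secret")
assert verify_and_apply(msg, state)
msg["config"]["sensor-42"]["interval_s"] = 1   # tampering is detected
assert not verify_and_apply(msg, state)
```

The key point is that the Edge applies only changes whose signature checks out against the claimed owner, so one tenant cannot reconfigure another tenant’s devices.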

SYNERGIA’s security solutions are targeted at a range of resource-constrained IoT applications for Smart Cities, and will be demonstrated in one particular use case: securing “Multi-tenancy Smart Buildings”. Working with Oxford Innovation (https://oxin.co.uk/), a number of Edge nodes and Endpoint sensors will be installed in the Future Space multi-tenancy building (http://www.futurespacebristol.co.uk), providing environmental monitoring, weather monitoring, green energy, and access control services. SYNERGIA’s solutions will allow the building operator to deploy services such as variable billing based on room utilisation, heating, and cooling, and to offer tenants a “Bring your own IoT device” policy. Furthermore, it will enable space users to ensure compliance with investors’ Environmental, Social and Corporate Governance policies.

Posted in News

Cyber Security of Connected Places

By Simon Arnell:  The UK’s National Cyber Security Centre recently released its “Connected Places: Cyber Security Principles” guidance document to advance the state of security in connected places.

Increasingly, systems that would have previously been considered SCADA systems are now starting to appear in all sorts of new applications using commodity hardware of unknown origin and risk; little to no air gapping exists between these new forms of critical systems, allowing potential attacks to spread. Additionally, security cannot be assumed to be inherent in the acquired devices unless care is taken during procurement or the cost of system design accommodates a great deal of focus on security. Therefore it is critical to understand your connected place and the risks associated with it being compromised.

The SYNERGIA project was formed to investigate the challenge of how to provide “secure and energy-efficient IoT systems in resource-constrained environments.” So you may ask, what is a “resource-constrained environment” and why do they require special security consideration? We characterise these as systems that rely on battery power and low-power wireless networking technologies. Resource-constrained devices may not have the compute capabilities to perform otherwise standard cryptography or full networking stacks – instead relying on lightweight alternatives.  

A connected place should be designed to be secure – not allowed to grow organically with ill-fitting security bolted on. A data-centric end-to-end approach is needed to protect data throughout its lifecycle across every part of the network. 

The sorts of applications we would see resource-constrained systems being applied to are ones with multi-year lifetimes such as precision agriculture, smart buildings, smart logistics, smart cities and smart countryside. By their very nature, devices and the network are exposed to the public and therefore have to be assumed to be in hostile environments and potentially compromised. 

Therefore these systems must be designed and implemented to be cyber resilient, the reverse engineering of any one device should not lead to the entire system being compromised. Data should also be protected at rest and in motion – despite operating on a compromised network the data should not be readable or subject to undetectable changes and replays. Likewise the system should be able to detect and respond to attacks, with strong recovery properties that enable it to return to a secure default state.
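To illustrate the replay and tamper protection described above, here is a minimal sketch of an authenticated frame format with a monotonic sequence number: the receiver drops any frame whose tag does not verify or whose sequence number has already been seen. The key, frame layout, and truncated tag are illustrative assumptions, not SYNERGIA’s wire protocol.

```python
import hashlib
import hmac
import struct

KEY = b"per-device-link-key"   # illustrative; real devices derive keys securely

def seal(seq, payload):
    # Frame layout: 8-byte big-endian sequence number | payload | 8-byte tag.
    header = struct.pack(">Q", seq)
    tag = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:8]
    return header + payload + tag

class Receiver:
    def __init__(self):
        self.last_seq = -1

    def open(self, frame):
        header, payload, tag = frame[:8], frame[8:-8], frame[-8:]
        expected = hmac.new(KEY, header + payload, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(tag, expected):
            return None                # tampered in transit: reject
        (seq,) = struct.unpack(">Q", header)
        if seq <= self.last_seq:
            return None                # replayed or stale frame: reject
        self.last_seq = seq
        return payload

rx = Receiver()
m1 = seal(1, b"temp=21.5")
assert rx.open(m1) == b"temp=21.5"
assert rx.open(m1) is None             # replay of the same frame is dropped
```

Because the sequence number is covered by the tag, an attacker on the network can neither alter a reading nor re-inject an old one without detection.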

The SYNERGIA project is now in its ninth month and design work is in full swing; we look forward to sharing the outputs once we move into the development and testing stages of the project, where we will operationalise the security of the connected place. The project will hold the first of two demonstration events on 31 January 2022.


SYNERGIA has kicked off

The SYNERGIA Innovate UK-funded collaborative project has kicked off.

Funded by Innovate UK under the “Demonstrators addressing cyber security challenges in the Internet of Things: round 2” competition, the 6-partner SYNERGIA consortium will devise, develop and demonstrate a novel secure-by-design, endpoint-to-core IoT platform for large-scale networks of low-power resource-constrained devices.

We are currently in the process of organising an end-user engagement workshop.

Bookmark this page for news and follow @ProjectSynergia on Twitter.



Quick Facts

Funder: Innovate UK
Project Cost: £2.2M
Total Funding: £1.6M