Next Steps for SYNERGIA

By Mark Davies

As the SYNERGIA project approaches its two-year mark, all those involved are focussing on how to capitalise on the work conducted and what the next steps should be. For Ioetec, cyber-specialists in securing IoT data, we have been looking at real-world deployments and commercial use cases of our technology on the Toshiba Umbrella test bed, a network of over 230 nodes containing 1,500+ sensors spread across 7 km of South Gloucestershire.

A golden opportunity presented itself recently with a second Umbrella competition led by South Gloucestershire Council, the West of England LEP, the West of England Combined Authority and, of course, Toshiba, funded through the UK Community Renewal Fund. The purpose of the competition is to address key national or regional challenges such as improving biodiversity and the environment, achieving zero carbon targets in public buildings, meeting the needs of an ageing population and using technology to support council services.

This challenge fitted well within Ioetec’s area of activity and, following a successful application, Ioetec has begun development of the “Environmental Monitoring for Social Housing IoT” (SocH-IoT) project.

Social Housing faces a monumental challenge. The clock is ticking, not only to meet net zero carbon by 2050, but also to achieve an EPC “C” rating across all homes by 2030. With 16,000 housing associations managing around 2.7 million homes, plus 352 local authorities, there is a lot of ground to cover. Currently, housing accounts for about a fifth of all UK greenhouse gas emissions, largely from the oil and gas used for heating and hot water, with around 10% coming from the social housing sector. Coupled with the current energy crisis, there has never been a more urgent challenge.

Significant hurdles restrict the ability to retrofit at scale and pace, with cost the most obvious cause for concern. However, a huge obstruction is the lack of fundamental data about the housing stock; gathering it simply hasn’t been a priority. Measuring existing energy usage, and the resultant usage during and after reduction measures are adopted, is a key aspect of understanding the effectiveness of the retrofits.

The SocH-IoT project will support this with an innovative approach to the design of sensors and the collection of data. Current solutions have three major drawbacks: first, the disparate range of potential hardware solutions does not capture data in a consistent way; second, the data delivery systems are not flexible enough to support a wide variety of analytics platforms; and third, there is no cost-effective and secure collection and distribution platform, a gap addressed by the Umbrella solution. The SocH-IoT project addresses these challenges and will support the reduction of domestic energy usage.

Ioetec will further develop our range of experimental modular sensor units to support a variety of add-on components measuring temperature, humidity, pressure, air quality, CO2, electricity usage, gas usage, outside weather conditions and occupancy.  Our existing software platform, together with low-power upgrades jointly developed with SYNERGIA partner the University of Bristol, will be enhanced to capture data from these sensors and deliver it securely to a central Umbrella hub. A variety of energy-efficient Umbrella connectivity solutions will be investigated, including WiFi, Bluetooth and LoRa. The Umbrella hub will authorise each sensor and then collect its data, which can be consolidated before being delivered to a remote data collection service.

The collection and delivery service must not rely on customer connectivity, so Umbrella is an ideal platform: it removes the need for expensive solutions such as 3G/4G, although these remain an option. The analytics platform can either be provided by Ioetec or, preferably, by local and central government agencies. The Ioetec service has an option to format the output data to a number of protocols, and this will be enhanced to support the requirements of the destination platforms.
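As a rough illustration of that reformatting step (the field names and target formats below are hypothetical, not the actual Ioetec schema), a consolidated reading might be mapped to a destination platform’s expected payload along these lines:

```python
import json
from datetime import datetime, timezone

# Hypothetical consolidated reading as it might arrive from an Umbrella hub.
reading = {
    "sensor_id": "socht-0042",
    "captured_at": datetime(2022, 10, 3, 9, 30, tzinfo=timezone.utc),
    "temperature_c": 19.4,
    "humidity_pct": 61.2,
    "co2_ppm": 540,
}

def to_generic_json(r: dict) -> str:
    """Flatten a reading into a plain JSON document for a generic analytics platform."""
    return json.dumps({
        "device": r["sensor_id"],
        "timestamp": r["captured_at"].isoformat(),
        "measurements": {k: v for k, v in r.items() if k not in ("sensor_id", "captured_at")},
    })

def to_csv_row(r: dict, columns=("temperature_c", "humidity_pct", "co2_ppm")) -> str:
    """Emit a fixed-column CSV row for platforms that ingest tabular data."""
    fields = [r["sensor_id"], r["captured_at"].isoformat()] + [str(r[c]) for c in columns]
    return ",".join(fields)

print(to_generic_json(reading))
print(to_csv_row(reading))
```

The point is that the same consolidated reading can be emitted in whichever shape the destination analytics platform expects, without changing anything on the sensor side.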

This design includes several innovative approaches: a modular design to reduce costs, the addition of authentication and secure data delivery, and the delivery of collected data in a consistent format to a variety of pre-existing analytics engines. These approaches solve the existing problem of multiple sensors with different capabilities delivering to individual user- or supplier-focused analytics engines, which in turn prevents central agencies from generating consistent data to establish regional and national patterns.

Through our experience on the SYNERGIA project, we are familiar with the Umbrella node and its capabilities. The available connectivity options will allow our sensors to be connected, and the ability to ‘containerise’ software will allow us to provision our software and its dependencies. The ability to collect data and transfer it to a central location, as provided by Umbrella, is a key enabler for testing in a Smart City environment.

The SYNERGIA team is working towards showcasing their collective technologies at our demonstration day, scheduled for 31st October 2022, so if you have an interest in cybersecurity in the Smart City environment, please do contact us at info@ioetec.com.

More information on the Final Demo can be found at: https://www.eventbrite.co.uk/e/synergia-demonstration-event-tickets-402311162517

 


Meeting the Challenges of the SYNERGIA IoT Endpoint Design

By Joshua Acanthe

To provide a realistic demonstration of SYNERGIA’s innovative security solutions for IoT networks, MAC Ltd developed a custom endpoint device that integrates several environmental sensors and wireless connectivity into a form factor representative of common IoT devices. Originally, a bespoke endpoint was to be created with a circuit board incorporating the nRF52840 radio-enabled microcontroller along with sensors to measure temperature, humidity, gas and light level, and to detect movement for tamper detection.

We created a circuit schematic based on our prototype board containing the microcontroller connected to various sensing elements via an I2C bus, removing all unnecessary components.

From there we could create a physical printed circuit board layout to be manufactured by external suppliers. Ensuring that the mounting holes are correctly placed, and that the I2C bus is not too long on the physical board, are important considerations when converting a schematic to a board layout.

However, sourcing discrete nRF52840 chips quickly was a significant challenge in the current semiconductor shortage. We met the challenge by amending the design to use nRF52840 dongles, as these were readily available. This meant we had to be creative in our design approach to keep the size of the endpoint small whilst using the dongles rather than just the microcontroller device itself. We decided that it was important to keep an element of flexibility, and we needed to maintain access to the USB port on the dongles to allow us to flash these devices with updated firmware. Therefore, to allow removal of the dongle and access to the USB port, we soldered header pins that keep the USB dongle at a minimal distance from the main board (see photograph).

Another challenge we encountered during the design process was that the original case design did not allow sufficient light to strike the light sensor, which meant there was not a significant difference between reported sensor values when the room light was on or off. To remedy this, we used LED light pipes to increase the light reaching the sensor. These pipes are usually used to carry light emitted from LEDs to the outside of a case; however, they can also act as a lens, focussing light from outside the case onto the light sensor inside. While not perfect, this was sufficient to reliably detect whether the light in the room was on or off.

 

Having manufactured several of these IoT endpoint sensors as part of the SYNERGIA project, we are now deploying them at Future Space in Bristol to collect data on energy usage, ventilation, and activity. They form an integral part of the formal demonstration of SYNERGIA’s innovations at Future Space Bristol on Monday 31st October 2022. www.futurespacebristol.co.uk

For more information contact Joshua Acanthe joshua.acanthe@macltd.com and register for the Final Demo at: https://www.eventbrite.co.uk/e/synergia-demonstration-event-tickets-402311162517


SYNERGIA Industrial and Academic Engagement

By Francesco Raimondo

There’s been a lot of external interest in the SYNERGIA project in the last few months.

PETRAS, the National Centre of Excellence for IoT Cybersecurity, interviewed some of the SYNERGIA team, alongside other projects in the Security of Digital Technology at the Periphery (SDTaP) programme. The interview focused on the relationship between research and industry and discussed both challenges and opportunities. Among the opportunities identified: the industry-academia relationship enables academic research to have significant impact through working with industrial partners on products and services that can quickly be brought to market, while industry gains access to the latest innovative approaches from research institutions to improve its products and services. The challenges arise from different attitudes to intellectual property between academia and industry, and from the social distancing due to Covid-19, which has made building close relationships more difficult in recent times.

SYNERGIA was one of the few projects to provide a progress update and recent results to an online meeting of SDTaP’s Independent Advisory Committee (IAC). During the two-hour meeting, Innovate UK provided updates on relevant UK Government initiatives, and there was also a discussion about the future activities of SDTaP and the IAC.

The SYNERGIA project was also a topic for discussion when BT visited the University of Bristol (UoB) at the Merchant Venturers Building (MVB). It was a valuable occasion for feedback from IoT and communications experts from both BT and UoB.

In April, a group from the Engineering and Physical Sciences Research Council (EPSRC) visited the SMART Internet Lab, at the MVB. SYNERGIA was featured alongside presentations on Secure Platform for Cloud-IoT Applications, AI For Future Wireless Networks, methods for improving the energy efficiency and linearity of communication transmitters and High Efficiency Broadband Power Amplifiers.

This month, the University of Bristol’s Communication Systems & Networks Group (CSN) showcase event also included presentations on SYNERGIA to researchers and PhD students working in the CSN group.

The research collaboration between the SYNERGIA partners continues, with several research publications in preparation. Research topics include Data Drift Detectors for Resource-Constrained IoT Devices, covering our work on a framework with low computational requirements for detecting drift in sensor data. On a different front, the experience gained developing the SYNERGIA machine learning model for detecting anomalies in sensor data streams was the starting point for further developments soon to be reported in papers on a human-in-the-loop Intrusion Detection System using Federated Learning, and on Efficient Audit Representation for Anomaly Detection using Graph Neural Networks.  Watch this space!


Reducing the Attack Surface through Secure Configuration Management

By Patrick Goldsack

The role of Configured Things Ltd. within the Synergia project is to produce a multi-tenanted platform for configuring and managing IoT devices, the data that flows from those devices, and the analytics that need to be applied at both the edge and the backend. This includes the ability to securely share data and control, and to delegate management activities in a secure way.

The basic architecture is illustrated in the diagram:

The system here consists of three roles: the platform owner, who can configure new endpoint owners to join the platform; endpoint owners, who own the endpoints that connect to the edge and supply data to the data flows; and, for good measure, a security officer, who can force-detach any endpoint that has been identified as a threat.

This is best illustrated by the deployment of the Synergia platform into Future Space – a space for SMEs and start-ups with many leased offices and laboratories, as well as communal shared spaces such as meeting rooms and open areas. In this deployment Synergia is providing the distributed, low-energy IoT platform consisting of a back-end, connected to multiple edge devices over standard networking technologies, and those in turn securely connected to a myriad of extremely low-power battery-driven wireless IoT devices. These IoT devices, mostly sensors, are installed in a variety of physical locations.

The platform provides to each of the tenants of the building – the endpoint owners – the ability to add new IoT devices and to collect and analyse data in a way that is private to them, or shared with their permission with the other tenants. Future Space is considered here to be the Platform Owner, and probably also the Security Officer.

The configuration of this space is dynamic and complex, so we have been developing a platform to enable the secure configuration and management of the system, and of the stream of changes to that configuration. This is done in a way that can handle multiple overlapping and conflicting changes, which need to be automatically authorised, validated, processed and reconciled.

The approach taken has properties which make it particularly suitable for use in complex, distributed environments such as connected places and other IoT deployments that are the target domains for Synergia.

  • It uses a declarative approach to describing configuration requirements which can be submitted and retracted as required
  • It provides modelling that can be specialised and scoped to each tenant
  • It considers configuration state to be the list of currently authorised configuration requirements (effectively change requests) relative to the notional “safe baseline” state
  • It takes a zero-trust approach to the origin and transport of these change requests

In the architecture diagram, this approach is reflected in the fact that every interaction in both directions (configuration and status) is through the exchange of signed, encrypted (where required, for example when the configuration contains a secret) and otherwise validated declarative configuration requirements.
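As a loose sketch of that exchange (the envelope fields, the shared key and the use of HMAC are illustrative assumptions only; a real deployment would use asymmetric signatures and certificates), a change request might be wrapped and verified like this:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key-not-for-production"  # placeholder; real systems would use per-author asymmetric keys

def sign_request(author: str, payload: dict) -> dict:
    """Wrap a declarative change request with an integrity tag tied to its author and content."""
    body = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(SHARED_KEY, author.encode() + body, hashlib.sha256).hexdigest()
    return {"author": author, "payload": payload, "signature": tag}

def verify_request(envelope: dict) -> bool:
    """Check the tag against the content; the trust travels with the request, not the transport."""
    body = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SHARED_KEY, envelope["author"].encode() + body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

request = sign_request("endpoint-owner-a", {"sensors": {"room-101": {"enabled": True}}})
assert verify_request(request)
```

Because the validation applies to the request itself, it does not matter which route the request took to reach the platform.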

In a connected place, configuration change is not simply an infrequent system administration task; it is in effect the currency and interface between systems that need to cooperate. This observation is the driving force behind treating configuration change as the primary requirement. However, whilst the ability of a system to change or be changed by multiple tenants is a key part of its value, it also widens the attack surface and is therefore a major security problem. Misconfiguration, whether unintentional, malicious, or simply because a previously valid change has been invalidated by a change in policy, is the root cause of most security breaches.

The traditional mitigation to this has been to enforce strong change control processes, rigorous testing, and limiting the pool of trusted actors that can initiate change. However, in a complex multi-party ecosystem such as a Connected Place, where systems are inherently decentralised, such approaches are limited in their effectiveness. Such processes are not designed for cross-tenant change management.

The emergence of DevOps has brought with it an increase in the adoption of declarative approaches for service deployment and configuration, which abstract the complexity of how to implement a change away from the specification of the required state. In much the same way that a Satnav only needs to be given the destination and can work out for itself how to get there regardless of the current location, declarative systems accept a definition of the required state (the destination) and work out the set of changes needed to bring the system into alignment (the route).

When implemented correctly, declarative systems are robust (as they hide the complexities of state management), are simpler to interact with than a transactional API (because they allow the user to focus on the required outcome) and provide direct traceability to who requested / authorised the current configuration of the system.

Such declarative systems have now become predominant for infrastructure provisioning (AWS CloudFormation, Azure Resource Manager, Terraform, etc.) and for containerised applications (k8s/helm, etc.).

However, the current generation of systems all share a number of limitations in this context:

  • They are generally focused on, and limited to, a linear sequence of states where each change is presented to the system as a complete new version of all or some part of the system. In the code analogy underpinning DevOps this is like moving from release to release along the main branch. This creates problems when, for example, it becomes necessary to isolate and remove a single faulty or malicious change. Our approach is more akin to simultaneously managing and releasing from multiple branches.
  • They are ‘release’, rather than ‘change’, focused, and work with relatively static configurations. Where there is dynamic behaviour (e.g., some form of auto scaling) they describe the configuration of that rather than act as the controller.
  • They assume non-overlapping trust domains; while it is best practice to divide the system into stacks for different layers or service areas, each of these then effectively has an owner or owners with full control of that part of the system.
  • There is limited consideration of domains driving sometimes conflicting changes, requiring reconciliation based on policy.
  • There is limited granularity for defining the scope of what can be changed within an area; modules will typically have a fixed set of parameters that can be supplied, values that can be inspected, and may provide default values and some type checking. Within a trust domain this may provide a reasonable level of protection and flexibility, but we need to be able to add finer-grained constraints without having to define a complex authorisation model.

The platform allows us to build declarative systems that overcome these limitations, based on the following principles:

  • The inputs to the platform define the desired outcome of a change request. This can of course be a full description of some part of the system, but it can also be of some subset.
  • Change requests come from several different sources, be it people or other systems, and must be authenticated appropriately.
  • The language used to express a change request should, as far as possible, limit what can be requested; it’s much harder to make an ‘insecure’ change if you can’t even describe it. This is an essential aspect of reducing the attack surface and is the analogue of Newspeak in the book 1984, where Orwell writes:

“Don’t you see that the whole aim of Newspeak is to narrow the range of thought? In the end we shall make thoughtcrime literally impossible, because there will be no words in which to express it.”

  • Change requests may overlap in scope and may be in conflict. The system must resolve these automatically in a deterministic way.
  • Change requests are idempotent – that means that resubmission of a change has no impact beyond that of the original change. This is essential to building robust disaggregated systems, making system recovery considerably easier.
  • It needs to be as easy to remove a change request as it was to make it in the first place, whether that’s in response to a change in intent or a change in permission.
  • Permission to make a change may require multiple parties to authorise it. Maybe someone from each of two cooperating tenants, or perhaps dual sign-off from one organisation to help mitigate against an insider attack. These policies around permission need to be rich enough to represent required authorisation structures, but equally simple enough to be able to reason about the security and to operate them in practice.
  • Changes that span multiple tenants must be independently agreed by all the parties, and any party to a change may subsequently withdraw permission, at which point the system should reconfigure itself accordingly. An example might be data sharing between two tenants of a connected place: there must be a matching offer of data and request for data for that sharing to be enacted. Either party could withdraw from the agreement, and at that point the underlying system should immediately remove the capability.
  • Zero trust should apply to changes in the same way that it applies to networks; the trust should be associated with the change itself, and not the transport or origin.

If we look briefly at how the platform meets these principles:

Our inputs are the desired outcome of a change request.
This might seem like a small point, but it underlies much of how the platform works, and is a different paradigm from declarative systems which take new complete models or modules. For a start it means the way in which we derive the new required state is itself stateless. Change requests from different sources can arrive or be removed in any order and at any time, and we re-evaluate the new desired state.

It needs to be as easy to remove a change request as it was to make it in the first place.
This is an important property in a dynamic system with multiple sources of change. As described above, the only state in our system is the current set of change requests. This matters particularly when considering the effect of removing a change request – the result is always a newly calculated state (which takes account of all the remaining change requests) and avoids any need for complex “undo” handling.

Change requests may overlap in scope and may be in conflict. The system must resolve these automatically in a deterministic and idempotent way.
Like other declarative systems, our input is serialised data – in our case an extended form of JSON. Each interface to our platform accepts change requests and a priority, the base level for which may be defined by the specific interface. Whilst this is an extremely simple policy model for resolving conflicts, it appears adequate for current needs, and more sophisticated approaches are being considered.
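A minimal sketch of that resolution step (the data shapes and the priority convention are assumptions for illustration, not Configured Things' implementation): the desired state is always recomputed from scratch from the current set of change requests, merged onto a baseline in a fixed order, so submitting, resubmitting or removing a request is deterministic and idempotent.

```python
import copy

def deep_merge(base: dict, overlay: dict) -> dict:
    """Recursively merge `overlay` into `base`; values merged later win at the leaves."""
    merged = copy.deepcopy(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = copy.deepcopy(value)
    return merged

def desired_state(baseline: dict, change_requests: list) -> dict:
    """Recompute the whole desired state from the current set of change requests.
    Requests are applied in ascending priority, so higher-priority requests are
    merged last and win any conflict; ties are broken by id to stay deterministic."""
    ordered = sorted(change_requests, key=lambda c: (c["priority"], c["id"]))
    state = baseline
    for change in ordered:
        state = deep_merge(state, change["config"])
    return state

baseline = {"network": {"channel": 11}, "sensors": {}}
requests = [
    {"id": "r1", "priority": 1,  "config": {"sensors": {"room-101": {"enabled": True}}}},
    {"id": "r2", "priority": 10, "config": {"sensors": {"room-101": {"enabled": False}}}},  # e.g. a security officer override
]
print(desired_state(baseline, requests))  # room-101 ends up disabled
```

Removing a request is then just a matter of recomputing the state without it; there is no separate “undo” path to reason about.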

Change requests come from several different sources, which might be people or other systems.
Rather than have a single interface used by all sources, with a common authentication and role-based access control model, we provide a separate interface for each source. Our API operations are simply “Make this change request” and “Remove this change request”. Each interface has its own constraints on ‘if’ and ‘how’ change requests are passed into the system. Interfaces are added to and removed from the system dynamically according to need, which helps keep the attack surface small.

The language used to express a change request should as far as possible limit what can be requested
It is also possible to apply a range of rich schema constraints on each interface, to limit the scope of change requests it can process. Further, each interface is configured to limit the scope of the change requests it accepts by specifying the root object against which they will be applied. In this way it is not possible for an interface accepting change requests for sensors to accidentally expose the capability to modify network settings.
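As an illustration of that scoping (the interface, root object and schema below are hypothetical), an interface configured for sensor settings can only ever produce changes nested under its root object, so a request submitted through it can never reach the network configuration:

```python
def scope_to_root(root_path: tuple, request_body: dict) -> dict:
    """Nest an accepted change request under the interface's fixed root object,
    so it can only describe state below that point in the configuration tree."""
    scoped = request_body
    for key in reversed(root_path):
        scoped = {key: scoped}
    return scoped

def accept_sensor_change(request_body: dict) -> dict:
    """Tiny stand-in for a schema check: this interface only accepts per-room sensor settings."""
    for room, settings in request_body.items():
        if not isinstance(settings, dict) or set(settings) - {"enabled", "report_interval_s"}:
            raise ValueError(f"change to {room!r} is outside the permitted schema")
    return scope_to_root(("sensors",), request_body)

print(accept_sensor_change({"room-101": {"enabled": True, "report_interval_s": 60}}))
# -> {'sensors': {'room-101': {'enabled': True, 'report_interval_s': 60}}}
# A request attempting to describe {"network": {...}} is rejected by the schema check,
# and even a permitted request can only ever land under "sensors", never at the top level.
```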

The remaining principles relate to our security model:

Authentication and non-repudiation:  Each change request can include one or more cryptographic signatures, which both verify the author(s) of the change request and the integrity of its content. For a change to be accepted the set of signatures must match the rules for that interface, which can for example be “Any of Alice, Bob or Charlie”, “At least two of …”, “Any signature from this Organisation”, etc.

Note that the authentication is against each change request. It does not otherwise depend on the origin, session, or transport used to bring the request to the system, which meets our Zero Trust principle.

Authentication is performed by each interface against its own specific policy. Change requests which do not meet the policy are rejected. A change in policy always results in the re-authentication of all the change requests submitted via that interface (remember change requests are our only state), so any change in policy always takes immediate effect, and results in a new desired state that conforms to the policy (i.e., is only derived from authenticated change requests). This works well alongside certificate revocation, for example, and provides all the mechanisms for seamless frequent roll-over of keys. This can allow for the use of short-lived certificates.
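A small sketch of how such per-interface rules might be evaluated (the rule format and the names are assumptions): because the stored change requests are the only state, the same check is simply re-run over all of them whenever the policy changes, so a revoked key or a tightened rule takes effect immediately.

```python
def satisfies(policy: dict, signers: set) -> bool:
    """Evaluate rules such as 'any of these people' or 'at least N of them'
    against the set of verified signers on a change request."""
    if policy["kind"] == "any_of":
        return bool(signers & set(policy["who"]))
    if policy["kind"] == "at_least":
        return len(signers & set(policy["who"])) >= policy["n"]
    return False

stored_requests = [
    {"id": "r1", "signers": {"alice"}},
    {"id": "r2", "signers": {"bob", "charlie"}},
]

# The interface's policy is tightened from "any of" to "at least two of":
policy = {"kind": "at_least", "n": 2, "who": ["alice", "bob", "charlie"]}

# Re-authenticate everything previously submitted against the new policy.
authorised = [r for r in stored_requests if satisfies(policy, r["signers"])]
print([r["id"] for r in authorised])  # only "r2" still meets the two-signature rule
```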

Authorisation:  Processing a change request is a merge of two data structures, the change request itself and the result of merging any higher priority change requests. Such an operation can create new values or update existing values.

In a comparable system with a REST API there would be operations for each object type with a corresponding RBAC rule to be configured to describe the permitted operations, the scope of which is a predetermined trade-off between granularity of control and complexity of rules.

In our platform this is replaced by constraints which can be placed at any point in the data structure to define the conditions under which that part of the structure can be updated, extended, read, or referenced during the merge. The authorisation for these constraints is based on the signatures in the change request. Note that, because it is embedded in the data structure, the authorisation policy is itself part of the desired state of the system. So, for example, a higher priority (processed first) change request can add or modify authorisation policy for some part of the system, which is then enforced against lower priority change requests. As any change in the set of change requests results in a recalculation of the desired state, any change in the authorisation policy is always applied immediately and the effect of any now-unauthorised changes is nullified.
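A rough sketch of constraints living inside the data structure itself (the marker name and the merge rule are invented for illustration): a higher-priority request can attach a policy to a subtree, and lower-priority requests that fail it are simply ignored when the desired state is recalculated.

```python
WRITABLE_BY = "__writable_by__"  # hypothetical embedded-constraint marker

def constrained_merge(state: dict, change: dict, signers: set) -> dict:
    """Merge `change` into `state`, leaving any subtree untouched if its embedded
    policy does not include at least one of the request's signers."""
    result = dict(state)
    allowed = state.get(WRITABLE_BY)
    if allowed is not None and not (signers & set(allowed)):
        return result  # the request has no authority over this subtree
    for key, value in change.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = constrained_merge(result[key], value, signers)
        else:
            result[key] = value
    return result

state = {"sensors": {WRITABLE_BY: ["tenant-a"], "room-101": {"enabled": True}}}

# A request signed only by tenant-b cannot alter tenant-a's subtree...
print(constrained_merge(state, {"sensors": {"room-101": {"enabled": False}}}, {"tenant-b"}))
# ...whereas the same request signed by tenant-a takes effect.
print(constrained_merge(state, {"sensors": {"room-101": {"enabled": False}}}, {"tenant-a"}))
```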

The above probably paints a picture of some form of centralised policy / resolution engine, but what we implement is a mesh of such systems which exchange models with each other. And how do we describe and control what the system looks like? That too is just another form of configuration, so we use the same language and tooling to deploy and manage our system. In QA speak, we are customer zero of our product.

In Synergia we recently demonstrated this approach through three types of role, described in the diagram, each of which acts independently yet collectively to control the overall state of the system: a Platform Operator who configures the IoT radio network and adds other roles to the system; one or more Endpoint Owners who configure devices and their associated data flows; and a Security Officer who can selectively disable devices that are perceived to have become a threat to the system. Each of the roles is an organisation, which may have members with specific tasks and permissions. We enable each organisation to specify “who may change what” from that, and indeed other, organisations.

You can see the video here on our YouTube channel:  https://youtu.be/yHa0g9LQsrQ


SYNERGIA Interim Demonstrator Completed

By Aftab Khan

The SYNERGIA project successfully completed the Interim Demonstrator to its Advisory Group on 25 January 2022.  The details and videos of the presentation are available here.


Detecting Network Intrusions at the Edge

By Dan Howarth

One of the core work packages in the Synergia project is the Distributed Intrusion Detection System for the Protection of the Edge, which is focussed on how we can detect network intrusions on resource-constrained edge devices in an IoT network.

The work was led by Smartia, one of the UK’s leading Industrial AI & IoT technology companies. IoT security is a particular focus of Smartia’s research as the adoption of its industrial intelligence platform, MAIO, accelerates.

This blog looks at the machine learning approach we used to detect these network intrusions, and why we chose this approach.

Machine Learning Approach

A machine learning model is typically trained on data that is representative of the data it can expect to see when tasked with making predictions. In our case, this is data on the operating system’s activities before, during and after a ‘container escape’ – an event where the software used to deploy applications (the container) hosts a malicious program that breaks out and attacks the edge device.

Our chosen modelling approach would need to meet the following requirements:

  • It should be appropriate to the data we were collecting – in particular, the dataset collected by the University of Bristol had a relatively small amount of data for container escape events compared to data for normal, non-attack conditions;
  • It needed to be deployable on an edge device – meaning it had to be small enough in memory terms to fit on the device and not consume too much power in execution;
  • Finally, it needed to be accurate – we wanted an approach that was powerful and therefore more likely to succeed, as well as one that offered plenty of flexibility and scope to fine-tune and squeeze as much performance as possible from the model.

Autoencoder

The approach we felt best met these requirements was an autoencoder.

Figure 1: Autoencoder schema (source: https://commons.wikimedia.org/wiki/File:Autoencoder_schema.png)

An autoencoder is used to convert a set of data into a smaller representation of itself. It does this by removing noise from the data and focusing on the core dimensions of the data. This encoding can be useful in its own right as a way of compressing large dimensional data into something more manageable for further analysis or modelling activity.

However, we additionally decode this representation using the autoencoder to try and recreate the original data passed to the model. The difference between the original and reconstructed data is captured by a reconstruction error. The lower the reconstruction error, the better able the model is to reconstruct the data.

Figure 1 sets out the high level architecture for an autoencoder.

Semi-supervised Learning

Autoencoders have a variety of uses; in our case, the autoencoder enabled us to tackle the dataset requirement by adopting a semi-supervised approach, which is designed to deal with situations where there is only a small amount of labelled data.

Our autoencoder is trained on normal (non-event data) only. It is trained until its reconstruction error is very low, so that we are confident that it can reconstruct the encoded normal data passed to it at the decoding stage. When ‘container escape’ data is passed to it, it should return a high reconstruction error – that is, it is unable to effectively reconstruct this data because it is sufficiently different from the normal data.
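A minimal sketch of this training setup (the feature width, layer sizes, framework choice and synthetic data below are illustrative assumptions, not the model Smartia deployed):

```python
import numpy as np
from tensorflow import keras

n_features = 32  # assumed width of the system-activity feature vector
normal = np.random.rand(5000, n_features).astype("float32")  # stand-in for normal (non-attack) data

# Small symmetric autoencoder: squeeze the input through a narrow bottleneck and rebuild it.
autoencoder = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(8, activation="relu"),   # bottleneck representation
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(n_features, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Train only on normal behaviour; the model never sees container-escape data.
autoencoder.fit(normal, normal, epochs=20, batch_size=64, validation_split=0.1, verbose=0)

def reconstruction_error(x: np.ndarray) -> np.ndarray:
    """Per-sample mean squared error between the input and its reconstruction."""
    return np.mean((x - autoencoder.predict(x, verbose=0)) ** 2, axis=1)
```

Data that resembles what the model was trained on reconstructs with a low error; container-escape activity, being sufficiently different, should not.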

Flexibility

An autoencoder is a very flexible approach. We are able to implement a wide range of architectures for the encoder and decoder to find the best performance (for example, varying the size and number of layers within the model). And, because it is part of the neural network family of models, it is well supported in machine learning software libraries, making design and implementation straightforward.

Additionally, because of this flexibility and by keeping the architecture small, we are able to design a model that can meet edge deployment constraints.

Results

Figure 2: Confusion matrix

The result of this approach was a model that was very accurate. Following model training, we tested it on unseen data and applied a threshold to the reconstruction error, so that any score above the threshold was classified as an anomaly (attack).
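Continuing the sketch from the previous section (the percentile rule for picking the threshold and the synthetic test set are assumptions for illustration), the classification and confusion matrix can be produced roughly as follows:

```python
from sklearn.metrics import accuracy_score, confusion_matrix

# Reuses `normal`, `n_features` and `reconstruction_error` from the earlier sketch.
# Errors on normal data set the threshold; anything above it is flagged as an attack.
threshold = np.percentile(reconstruction_error(normal), 99)

test_data = np.vstack([normal[:500], 2 * np.random.rand(50, n_features).astype("float32")])  # stand-in test set
true_labels = np.array([0] * 500 + [1] * 50)  # 0 = normal, 1 = container escape
predicted = (reconstruction_error(test_data) > threshold).astype(int)

print(confusion_matrix(true_labels, predicted))
print(f"accuracy: {accuracy_score(true_labels, predicted):.3f}")
```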

The confusion matrix in Figure 2 shows how well the model performed on the test set. As we can see, it predicted all anomalies correctly and almost all of the normal data too.

This meant an accuracy of over 99%, which is the final piece of evidence that the approach we took was able to meet our requirements.


SYNERGIA Smart Building Use Case and Prototype

By Joshua Acanthe

SYNERGIA is creating a secure platform for the next generation of resource-constrained IoT devices and, to demonstrate this, we have been developing an example of such a device.


Future Space is a dedicated facility for over 40 start-up businesses requiring work and lab space in Bristol. This presents an interesting challenge of how to manage IoT data within a multi-tenancy system. We need to make sure only those with authorisation can access the data. What is this data? How can it be used? In this post, we will find out.

Three important pieces of data for a smart building would be:

  • Energy usage
  • Space usage
  • Environmental Monitoring

Energy usage monitoring covers the lights, heating and ventilation to provide insight into the building’s carbon footprint. Data on energy usage can drive down operational costs for the building and can also help combat global warming through reduced carbon emissions.

Space usage monitoring takes data from motion-sensing technology, or other means of detecting presence, and uses it to establish how the building space is used. For example, it can flag wasted or unused space as well as space that is in constant use throughout the day, which could lead to repurposing wasted space for something more useful.

Environmental monitoring collects data such as temperature, humidity and sometimes gas levels to make sure that these environmental factors are suitable for work. These readings can also be used to control heating, air conditioning and ventilation.

A prototype has been developed to collect and send this data to the backend, where the data analytics can take place. Devices like these will be connected to the SYNERGIA network to provide data and to help demonstrate the innovative security features that SYNERGIA provides.


SYNERGIA Publicised and Published

By Dr. James Pope

Over the past couple of months the SYNERGIA Project has been busy collaborating, participating in numerous events and developing quality research publications.  The project engaged a wide range of academic and industry organisations at the following events.

  1. Smart Internet Lab Conference
  2. Bristol CSN Lab – BT Visit
  3. Toshiba UMBRELLA Launch Event
  4. FutureSpace Founder’s Meeting Presentation

http://www.bristol.ac.uk/engineering/research/smart/smart2021-future-networks-research-conference/

On 23 September, the project presented at the University of Bristol’s SMART: 2021 Future Networks Research Conference.  The conference is a chance for academic and industrial experts to discuss future ambitions and challenges in telecommunications research.  The SYNERGIA virtual presentation had over 50 attendees from industry and academia.

On 6 October, the University of Bristol’s Communication Systems & Networks Group (CSN) hosted a visit from senior members of BT.  The visit was conducted in the Merchant Venturers Building, CSN Lab.  The SYNERGIA Project was presented along with several other applied and theoretical communications research projects.  There was particular interest in how the SYNERGIA AI/ML solution generalised beyond the IIoT.

Poster of SYNERGIA at UMBRELLA Launch Event

https://www.eventbrite.co.uk/e/umbrella-launch-event-tickets-176415693087

On 18 October, Toshiba held the UMBRELLA Launch Event on the University of the West of England’s Frenchay Campus.  There were over 70 attendees from industry, academia, and local government.  The SYNERGIA Project engaged with attendees in a booth / short discussion format.  Numerous attendees approached SYNERGIA project members to discuss our AI/ML, system, security, and multi-tenancy research.

On 21 October, the SYNERGIA Project met with Future Space companies to discuss possible use cases and collaboration.  There were approximately twelve attendees, half of them at the executive level.  The meeting was facilitated and supported by Oxford Innovation.  Two Future Space companies have subsequently been in contact regarding potential future collaboration.


https://www.futurespacebristol.co.uk/

Finally, the SYNERGIA Project received notification that one of its publications had been accepted for an upcoming Association for Computing Machinery (ACM) conference on 17 November.  The publication includes 14 project members across 3 consortium organisations and is available via the ACM Digital Library.


How the SYNERGIA project supports COP26 objectives

By Mark Davies: With the UN Climate Change Conference taking place in Glasgow in November 2021, we take a look at how the SYNERGIA project will support the conference’s goals.

COP26 will focus on four major objectives:

  1. Secure global net zero by mid-century and keep 1.5 degrees within reach
  2. Adapt to protect communities and natural habitats.
  3. Mobilise finance
  4. Work together to deliver

Cities, whilst covering only 3% of the earth’s surface, consume 78% of the world’s energy and account for over 60% of global emissions. Connected places not only enhance the quality of life for citizens by using data to improve operations, including transportation, public services, utilities and infrastructure, but will also support the environmental changes needed to achieve the aggressive net-zero targets being set over the next decade.

Connecting and integrating services and systems within large-scale, multi-tenanted environments brings huge challenges as organisational boundaries are crossed, not least the security aspects. The Centre for the Protection of National Infrastructure (CPNI), along with the BSI, commissioned the PAS 185 framework to specify a “security-minded approach” to the establishment and implementation of Smart Cities.

The SYNERGIA project has the potential to help address some of these challenges to support the implementation of a secure smart city environment through its work on the development and implementation of a secure platform, making it easier for central and local governments to achieve their goals.

SYNERGIA, a consortium led by Toshiba that includes the University of Bristol, Ioetec, Smartia, MAC Ltd and Configured Things, is developing and will demonstrate a novel secure-by-design, endpoint-to-core IoT platform for large-scale networks of low-power, resource-constrained devices. This novel solution will help to keep IoT devices, and the data they create, secure and monitored whilst connected to the network.

Smart cities can help to

  • Reduce the levels of carbon emissions. Currently, the transport sector makes up 14% of global greenhouse gas emissions, and as more people gravitate towards cities, they are set to house over two thirds of the world’s population by 2050. Transportation in urban areas will increase which, if not addressed, will result in a continued rise in emissions. IoT data used to improve traffic management, shared transport and parking can support the more efficient movement of a city’s residents and visitors.
  • Protect communities & natural habitats with improved environmental monitoring, optimised services like waste collection and by using smart building solutions
  • Improve the efficient use of energy and water resources contributing to a more sustainable society.
  • Establish safe and secure cross-operational data-sharing to bring disparate stakeholders together to deliver better outcomes for their inhabitants.

To perform these tasks effectively, smart cities will have millions of connected IoT devices, but crucially these cannot be implemented fully without effective and robust cybersecurity.

As part of UKRI’s Strategic Priorities Fund and the Security of Digital Technology at the Periphery (SDTaP) programme, SYNERGIA is addressing the challenge of a near-to-market secure and energy-efficient IoT system for resource-constrained environments such as smart cities.  By incorporating, utilising and combining technologies in a novel way, the SYNERGIA platform supports the NCSC’s new “Connected Places Cyber Security Principles”, enabling key stakeholders to enhance the quality of life for citizens, improve co-operation between siloed sectors and achieve the targets promised at this year’s conference.


SYNERGIA meets SDTaP’s IAC

By Theo Spyridopoulos: On the 15th of July, the Industrial Advisory Committee of the Security of Digital Technology at the Periphery (SDTaP) programme met to hear updates from the three Demonstrator projects in Round 1:

  • i-TRACE: IoT Transport Assured for Critical Environments, a collaboration between the University of Warwick, Cisco, BT, Senseon, and Costain working with Artificial Intelligence and Distributed Ledger technologies.
  • Secure-CAVs: The world’s first on-chip and in-life monitoring solution to rapidly detect cyber security threats in Connected and Autonomous Vehicles (CAVs), a collaboration between the universities of Coventry and Southampton, Siemens, and Copper Horse.
  • ManySecured: Collaborative development of Secure IoT Gateways & Routers, a collaboration between Cisco, NquiringMinds, the University of Oxford, and our friends at the IoT Security Foundation.

and from Round 2:

  • SYNERGIA: Secure bY desigN End to end platfoRm for larGe scale resource constrained Iot Applications, a collaboration between Toshiba’s Bristol R&D Lab, Configured Things, Ioetec, MAC Ltd, and Smartia.

In addition, we heard from two projects, led by PETRAS researchers, funded under SDTaP’s commercialisation stream through CyberASAP (Cyber Security Academic Startup Accelerator Programme), the only accelerator programme in the cybersecurity ecosystem for pre-seed funding:

  1. TAIMAS: Timing Anomalies as an Indicator of Mal-Intervention in Automation Systems (UCL and CUBE 2 Ltd in Worthing)
  2. THuVA: Improving Security with Techno-Human Vulnerability Analysis (UCL)

SYNERGIA falls under Theme 2, “Secure and energy-efficient IoT systems in resource-constrained environments”, of Innovate UK’s “Demonstrators addressing cyber security challenges in the Internet of Things” round 2 call, and focuses on end-to-end cyber security for IoT systems with resource-constrained devices. It involves AI as part of the security detection and mitigation mechanism at the Edge and plans to demonstrate the results in a real environment based on an existing Edge IoT platform. Similar challenges and areas of interest, especially in the fields of AI at the Edge Gateway and Secure Configuration Management of thousands of IoT devices at the Edge, were also identified during the meeting. Project TAIMAS, in particular, uses autoencoders for anomaly detection to perform intrusion detection in Building Automation Systems in a similar way to us. In SYNERGIA we push the detection to the Edge, providing a human-in-the-loop under a Federated Learning architecture to improve the model’s performance in cases of low-confidence outputs.

SYNERGIA focuses on a secure-by-design end-to-end platform for large-scale resource-constrained IoT applications. We follow a three-tier architecture that includes i) the resource-constrained Endpoint Tier, where battery-powered sensor devices are scattered in the field; ii) the Edge Tier, which is geographically located close to the Endpoints and is responsible for collecting the sensor data and providing processing capabilities used for data analytics and system configuration management at the Edge; and iii) the Back End Tier, which is responsible for aggregating the processed data from the Edge Tier and providing a User Interface to end users.

To inform the design of our security solutions, we conducted a threat analysis for the whole end-to-end system based on the threat modelling process in NIST Special Publication 800-30. The main threats we are interested in revolve around unauthorised or malevolent users, services and devices trying to access or disrupt our system, targeting the Endpoint and Edge Tiers. To address these threats, we are developing a series of security solutions operating at these two Tiers.

Similarly to the TAIMAS project, SYNERGIA uses an autoencoder running at the Edge to model the Edge device’s normal behaviour and detect abnormal behaviours. To improve the model’s performance, we use a human-in-the-loop approach under a Federated Learning architecture, providing a user interface for security experts to extract system data corresponding to low confidence model inferences for external analysis and data labelling. We also employ AI deployed at the Edge to detect malicious drifts in the data collected from the Endpoint devices.
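As a rough sketch of that triage step (the band around the threshold and the review queue are illustrative assumptions, not the deployed design), low-confidence inferences can be set aside for expert labelling rather than acted on automatically:

```python
import numpy as np

def triage(errors: np.ndarray, threshold: float, band: float = 0.1):
    """Split reconstruction errors into confident decisions and a review queue.
    Scores within +/- band*threshold of the threshold are treated as low confidence
    and routed to a security expert for labelling; the confirmed labels can then
    feed the next federated training round."""
    low_confidence = np.abs(errors - threshold) <= band * threshold
    anomalies = (errors > threshold) & ~low_confidence
    return anomalies, low_confidence

errors = np.array([0.02, 0.09, 0.11, 0.45])
anomalies, review = triage(errors, threshold=0.10)
print(anomalies)  # only the clearly high score is flagged automatically
print(review)     # borderline scores go to the human-in-the-loop queue
```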

A point raised during the meeting was the challenge of configuring and managing thousands of Endpoint devices scattered in the field; Intel has faced this issue with IoT deployments in the US. The existence of multiple actors and devices, each with different roles and owners, requires dynamic configuration management and control of the IoT. Providing this closer to the Endpoint Tier improves scalability as well as security and user privacy. In SYNERGIA, we address this challenge by delivering secure configuration and management of Endpoints, as well as secure Endpoint data processing through signed data flows deployed at the Edge.

SYNERGIA’s security is targeted at a range of resource-constrained IoT applications for Smart Cities, and the project will demonstrate the solutions developed in one particular use case: securing “Multi-tenancy Smart Buildings”. Working with Oxford Innovation (https://oxin.co.uk/), a number of Edge nodes and Endpoint sensors will be installed in the Future Space multi-tenancy building (http://www.futurespacebristol.co.uk), providing environmental monitoring, weather monitoring, green energy and access control services. SYNERGIA’s solutions will allow the building operator to deploy services such as variable billing based on room utilisation, heating and cooling, and to offer users a “Bring your own IoT device” policy. Furthermore, they will enable space users to ensure compliance with investors’ Environmental, Social and Corporate Governance policies.


Quick Facts

Funder: Innovate UK
Project Cost: £2.2M
Total Funding: £1.6M