<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.nexusforum.martel-innovate.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Admin</id>
	<title>NexusForum.EU Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.nexusforum.martel-innovate.com/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Admin"/>
	<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php/Special:Contributions/Admin"/>
	<updated>2026-04-15T22:45:28Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.43.0</generator>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Research_and_Innovation_Actions&amp;diff=258</id>
		<title>Research and Innovation Actions</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Research_and_Innovation_Actions&amp;diff=258"/>
		<updated>2026-03-11T09:56:31Z</updated>

		<summary type="html">&lt;p&gt;Admin: Protected &amp;quot;Research and Innovation Actions&amp;quot; ([Edit=Allow only administrators] (indefinite) [Move=Allow only administrators] (indefinite))&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an overview of the Research and Innovation Actions funded under the European Union&#039;s Horizon 2020 and Horizon Europe programmes, with specific reference to the projects under the EUCloudEdgeIoT umbrella. &lt;br /&gt;
&lt;br /&gt;
The project listing is constantly updated, and will be linked to external repositories to further expand the knowledge base available in this Wiki.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[ItemType::EU Project]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?CORDIS URL&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Cloud-Edge_Use_Cases&amp;diff=257</id>
		<title>Cloud-Edge Use Cases</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Cloud-Edge_Use_Cases&amp;diff=257"/>
		<updated>2025-11-18T13:57:31Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[Working group name::Cloud-Edge Use Cases]] [[ItemType::Working group]]==&lt;br /&gt;
[[Description::The Cloud-Edge Use Cases Working Group deals with use cases and applications that could take advantage of the continuum, such as Mobility, Transport and Travel, Energy, Manufacturing and Industry 4.0/5.0, Health, Infrastructures, Smart Buildings &amp;amp; Cities, Tourism &amp;amp; Cultural Heritage, Agriculture &amp;amp; Environment, and Media. This Working Group will contribute to the initial roll-out of next-generation use cases as part of a first industrial deployment at Europe-wide scale, showcasing data processing in different sectors to verify functionality, high scalability, interoperability, portability, interconnectivity and compatibility.]]&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Working group co-leader::Giovanni Frattini]] and [[Working group co-leader::Carlos Enrique Palau Salvador]].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=NexusForum.EU_Working_Groups&amp;diff=256</id>
		<title>NexusForum.EU Working Groups</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=NexusForum.EU_Working_Groups&amp;diff=256"/>
		<updated>2025-11-18T13:56:51Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Cloud-Edge Use Cases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== &#039;&#039;&#039;NexusForum.EU Working Groups&#039;&#039;&#039; ===&lt;br /&gt;
The main objective of the Working Groups is to &#039;&#039;&#039;collect contributions and feedback&#039;&#039;&#039; on the roadmap from relevant &#039;&#039;&#039;EU industry experts and researchers&#039;&#039;&#039;. The NexusForum.EU thematic Working Groups are based on the main sections of the Research and Innovation roadmap. These working groups are aligned with two European strategic initiatives: the European Alliance for Industrial Data, Edge and Cloud and the Important Project of Common European Interest on Next Generation Cloud Infrastructure and Services (IPCEI-CIS).&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Working Groups&#039;&#039;&#039; are organised in three main categories: &#039;&#039;&#039;European Alliance for Industrial Data, Edge and Cloud&#039;&#039;&#039;, &#039;&#039;&#039;IPCEI-CIS&#039;&#039;&#039; and &#039;&#039;&#039;International cooperation&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;European Alliance for Industrial Data, Edge and Cloud&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Sovereignty &amp;amp; Open Source]]&#039;&#039;&#039; =====&lt;br /&gt;
The Sovereignty &amp;amp; Open Source Working Group&#039;s scope is to focus on moving towards European digital sovereignty, bolstering European digital capabilities and skills development related to computing technologies.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Alberto P. Martí]] and [[Sachiko Muto]].  &lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Sustainability]]&#039;&#039;&#039; =====&lt;br /&gt;
The Sustainability Working Group is centred on data centre energy and resource efficiency, efficiency metrics, the circular economy in the data centre industry, and data platforms to enable decarbonization.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Antje Raetzer Scheibe]] and [[Jon Summers]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Interoperability]]&#039;&#039;&#039; =====&lt;br /&gt;
The Interoperability Working Group focuses on APIs, standards for interoperability, and meta-orchestration and federation: the federation of distributed cloud and edge computing resources, abstraction layers, and standardization for meta-orchestration and workload optimization in multi-provider federations. Finally, it addresses interoperability across cloud/edge platforms and providers, covering the bare-metal, IaaS and PaaS layers.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Gorka Benguria]] and [[Lukas Rybok]]. &lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cybersecurity]]&#039;&#039;&#039; =====&lt;br /&gt;
The Cybersecurity Working Group&#039;s scope is to work on the themes of zero trust, identity management, privacy, end-to-end encryption/confidentiality, public key infrastructure, security protocols and standards, and risk assessment.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Iraklis Symeonidis]] and [[Arthur van der Wees]].&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;IPCEI-CIS&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cloud-Edge Use Cases]]&#039;&#039;&#039; =====&lt;br /&gt;
The Cloud-Edge Use Cases Working Group deals with use cases and applications that could take advantage of the continuum, such as Mobility, Transport and Travel, Energy, Manufacturing and Industry 4.0/5.0, Health, Infrastructures, Smart Buildings &amp;amp; Cities, Tourism &amp;amp; Cultural Heritage, Agriculture &amp;amp; Environment, and Media. This Working Group will contribute to the initial roll-out of next-generation use cases as part of a first industrial deployment at Europe-wide scale, showcasing data processing in different sectors to verify functionality, high scalability, interoperability, portability, interconnectivity and compatibility.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Giovanni Frattini]] and [[Carlos Enrique Palau Salvador]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[AI for Cloud-Edge]]&#039;&#039;&#039; =====&lt;br /&gt;
The AI for Cloud-Edge Working Group works on meta-orchestration, federation of multi-cloud and IaaS resources, and integration and monitoring of the continuum, covering operating systems and virtualization: containers, hypervisors, virtual networks and virtual storage. Its scope extends to serverless services spanning the edge-cloud-HPC continuum, service management, application lifecycle orchestration and data services, as well as the development of infrastructure-related services running on the multi-provider cloud-edge continuum, which form the basis for real-time data services with ultra-low latency and for load balancing for optimised utilization. This will enable sorting, interpreting and prioritizing the storage and processing of large amounts of data in advance, as close as possible to the place of origin and/or consumption of that data.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Antonio Álvarez]] and [[Ian Marsh]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cloud-Edge for AI]]&#039;&#039;&#039; =====&lt;br /&gt;
The Cloud-Edge for AI Working Group works on dataspaces, advanced data-exchange capabilities, AI tools for federated learning (FL) and the lifecycle of AI models, and support for data-driven applications, digital twins and application deployment: workloads that need to run on a cloud-edge infrastructure and require complex orchestration, identity and observability across the multi-provider continuum. Furthermore, the scope of the WG consists in providing integrated services: application lifecycle management to build, deploy and maintain apps across the cloud-edge continuum (platform services); data management to ease data ingestion, transformation and analysis in a multi-provider, federated environment in accordance with European regulation (data platform); and innovative data processing leveraging AI and ML (smart processing services).&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Björn Forsberg]] and [[Antal Kuthy]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Telco Cloud-Edge]]&#039;&#039;&#039; =====&lt;br /&gt;
The Working Group on Telco Cloud-Edge deals with open reference architectures for open cloud and edge infrastructure, standards for data centres, AI-based predictive maintenance of data centres and cloud &amp;amp; edge infrastructure, networking, time synchronization, requirements across providers, security, automated operations, and APIs for bare-metal as a service. Furthermore, it will work on optimized data centre design for cloud and edge, advanced simulation and prediction capabilities, security and accessibility of physical infrastructure, open hardware, operating systems, operation management &amp;amp; monitoring, connectivity, network orchestration with multi-cloud orchestration, edge infrastructure deployment, network functions at the edge, connectivity between cloud and edge, and edge connectivity at scale. The Working Group will contribute to setting up an appropriate, well-supported next-generation infrastructure to manage the technological complexity of the meshed continuum. Lastly, it aims to develop and set up the physical and logical linking of networks, including integrated smart network services for the cloud-edge continuum, enabling the entire network to combine cloud-edge computing processes and data transfer throughout the EU.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[David Artuñedo]] and [[Anders Lindgren]].&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;International cooperation&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[EU-Japan Cooperation]]&#039;&#039;&#039; =====&lt;br /&gt;
The EU-Japan Cooperation Working Group is meant to foster international collaboration based on the future association agreements of the Horizon Europe programme. This Working Group is an opportunity for experts from industry and research to discuss complementary aspects that can result in better cooperation between the European Union and Japan in terms of technical capabilities, policy and legal aspects.&lt;br /&gt;
&lt;br /&gt;
Discussions revolve around questions such as: what key priorities are both the EU and Japan willing to commit to? In which domains can effective cooperation take place?&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Wiktoria Bochenska]] and [[Kazuyuki Shimizu]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[EU-Korea Cooperation]]&#039;&#039;&#039; =====&lt;br /&gt;
The EU-Korea Cooperation Working Group is meant to foster international collaboration based on the future association agreements of the Horizon Europe programme. This Working Group is an opportunity for experts from industry and research to discuss complementary aspects that can result in better cooperation between the European Union and the Republic of Korea in terms of technical capabilities, policy and legal aspects. Discussions revolve around questions such as: what key priorities are both the EU and Korea willing to commit to? In which domains can effective cooperation take place?&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=NexusForum.EU_Working_Groups&amp;diff=255</id>
		<title>NexusForum.EU Working Groups</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=NexusForum.EU_Working_Groups&amp;diff=255"/>
		<updated>2025-11-18T13:55:31Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Cloud-Edge Use Cases */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== &#039;&#039;&#039;NexusForum.EU Working Groups&#039;&#039;&#039; ===&lt;br /&gt;
The main objective of the Working Groups is to &#039;&#039;&#039;collect contributions and feedback&#039;&#039;&#039; on the roadmap from relevant &#039;&#039;&#039;EU industry experts and researchers&#039;&#039;&#039;. The NexusForum.EU thematic Working Groups are based on the main sections of the Research and Innovation roadmap. These working groups are aligned with two European strategic initiatives: the European Alliance for Industrial Data, Edge and Cloud and the Important Project of Common European Interest on Next Generation Cloud Infrastructure and Services (IPCEI-CIS).&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Working Groups&#039;&#039;&#039; are organised in three main categories: &#039;&#039;&#039;European Alliance for Industrial Data, Edge and Cloud&#039;&#039;&#039;, &#039;&#039;&#039;IPCEI-CIS&#039;&#039;&#039; and &#039;&#039;&#039;International cooperation&#039;&#039;&#039;.&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;European Alliance for Industrial Data, Edge and Cloud&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Sovereignty &amp;amp; Open Source]]&#039;&#039;&#039; =====&lt;br /&gt;
The Sovereignty &amp;amp; Open Source Working Group&#039;s scope is to focus on moving towards European digital sovereignty, bolstering European digital capabilities and skills development related to computing technologies.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Alberto P. Martí]] and [[Sachiko Muto]].  &lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Sustainability]]&#039;&#039;&#039; =====&lt;br /&gt;
The Sustainability Working Group is centred on data centre energy and resource efficiency, efficiency metrics, the circular economy in the data centre industry, and data platforms to enable decarbonization.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Antje Raetzer Scheibe]] and [[Jon Summers]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Interoperability]]&#039;&#039;&#039; =====&lt;br /&gt;
The Interoperability Working Group focuses on APIs, standards for interoperability, and meta-orchestration and federation: the federation of distributed cloud and edge computing resources, abstraction layers, and standardization for meta-orchestration and workload optimization in multi-provider federations. Finally, it addresses interoperability across cloud/edge platforms and providers, covering the bare-metal, IaaS and PaaS layers.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Gorka Benguria]] and [[Lukas Rybok]]. &lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cybersecurity]]&#039;&#039;&#039; =====&lt;br /&gt;
The Cybersecurity Working Group&#039;s scope is to work on the themes of zero trust, identity management, privacy, end-to-end encryption/confidentiality, public key infrastructure, security protocols and standards, and risk assessment.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Iraklis Symeonidis]] and [[Arthur van der Wees]].&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;IPCEI-CIS&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cloud-Edge Use Cases]]&#039;&#039;&#039; =====&lt;br /&gt;
The Cloud-Edge Use Cases Working Group deals with use cases and applications that could take advantage of the continuum, such as Mobility, Transport and Travel, Energy, Manufacturing and Industry 4.0/5.0, Health, Infrastructures, Smart Buildings &amp;amp; Cities, Tourism &amp;amp; Cultural Heritage, Agriculture &amp;amp; Environment, and Media. This Working Group will contribute to the initial roll-out of next-generation use cases as part of a first industrial deployment at Europe-wide scale, showcasing data processing in different sectors to verify functionality, high scalability, interoperability, portability, interconnectivity and compatibility.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Carlos Enrique Palau Salvador]] and [[Dimosthenis Kyriazis]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[AI for Cloud-Edge]]&#039;&#039;&#039; =====&lt;br /&gt;
The AI for Cloud-Edge Working Group works on meta-orchestration, federation of multi-cloud and IaaS resources, and integration and monitoring of the continuum, covering operating systems and virtualization: containers, hypervisors, virtual networks and virtual storage. Its scope extends to serverless services spanning the edge-cloud-HPC continuum, service management, application lifecycle orchestration and data services, as well as the development of infrastructure-related services running on the multi-provider cloud-edge continuum, which form the basis for real-time data services with ultra-low latency and for load balancing for optimised utilization. This will enable sorting, interpreting and prioritizing the storage and processing of large amounts of data in advance, as close as possible to the place of origin and/or consumption of that data.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Antonio Álvarez]] and [[Ian Marsh]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cloud-Edge for AI]]&#039;&#039;&#039; =====&lt;br /&gt;
The Cloud-Edge for AI Working Group works on dataspaces, advanced data-exchange capabilities, AI tools for federated learning (FL) and the lifecycle of AI models, and support for data-driven applications, digital twins and application deployment: workloads that need to run on a cloud-edge infrastructure and require complex orchestration, identity and observability across the multi-provider continuum. Furthermore, the scope of the WG consists in providing integrated services: application lifecycle management to build, deploy and maintain apps across the cloud-edge continuum (platform services); data management to ease data ingestion, transformation and analysis in a multi-provider, federated environment in accordance with European regulation (data platform); and innovative data processing leveraging AI and ML (smart processing services).&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Björn Forsberg]] and [[Antal Kuthy]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Telco Cloud-Edge]]&#039;&#039;&#039; =====&lt;br /&gt;
The Working Group on Telco Cloud-Edge deals with open reference architectures for open cloud and edge infrastructure, standards for data centres, AI-based predictive maintenance of data centres and cloud &amp;amp; edge infrastructure, networking, time synchronization, requirements across providers, security, automated operations, and APIs for bare-metal as a service. Furthermore, it will work on optimized data centre design for cloud and edge, advanced simulation and prediction capabilities, security and accessibility of physical infrastructure, open hardware, operating systems, operation management &amp;amp; monitoring, connectivity, network orchestration with multi-cloud orchestration, edge infrastructure deployment, network functions at the edge, connectivity between cloud and edge, and edge connectivity at scale. The Working Group will contribute to setting up an appropriate, well-supported next-generation infrastructure to manage the technological complexity of the meshed continuum. Lastly, it aims to develop and set up the physical and logical linking of networks, including integrated smart network services for the cloud-edge continuum, enabling the entire network to combine cloud-edge computing processes and data transfer throughout the EU.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[David Artuñedo]] and [[Anders Lindgren]].&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;International cooperation&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[EU-Japan Cooperation]]&#039;&#039;&#039; =====&lt;br /&gt;
The EU-Japan Cooperation Working Group is meant to foster international collaboration based on the future association agreements of the Horizon Europe programme. This Working Group is an opportunity for experts from industry and research to discuss complementary aspects that can result in better cooperation between the European Union and Japan in terms of technical capabilities, policy and legal aspects.&lt;br /&gt;
&lt;br /&gt;
Discussions revolve around questions such as: what key priorities are both the EU and Japan willing to commit to? In which domains can effective cooperation take place?&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Wiktoria Bochenska]] and [[Kazuyuki Shimizu]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[EU-Korea Cooperation]]&#039;&#039;&#039; =====&lt;br /&gt;
The EU-Korea Cooperation Working Group is meant to foster international collaboration based on the future association agreements of the Horizon Europe programme. This Working Group is an opportunity for experts from industry and research to discuss complementary aspects that can result in better cooperation between the European Union and the Republic of Korea in terms of technical capabilities, policy and legal aspects. Discussions revolve around questions such as: what key priorities are both the EU and Korea willing to commit to? In which domains can effective cooperation take place?&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Carlos_Enrique_Palau_Salvador&amp;diff=254</id>
		<title>Carlos Enrique Palau Salvador</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Carlos_Enrique_Palau_Salvador&amp;diff=254"/>
		<updated>2025-11-18T13:55:26Z</updated>

		<summary type="html">&lt;p&gt;Admin: Created page with &amp;quot;Carlos Enrique Palau Salvador is a full professor at the Universitat Politècnica de València, where he earned both his Telecommunications Engineering degree and his PhD. His research centers on distributed real-time systems, network security, IoT, and large-scale communication infrastructures, all closely tied to cloud-edge computing. He has led and participated in major Spanish and European R&amp;amp;D projects involving advanced networking, 5G, industrial AI, and smart data...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Carlos Enrique Palau Salvador is a full professor at the Universitat Politècnica de València, where he earned both his Telecommunications Engineering degree and his PhD. His research centers on distributed real-time systems, network security, IoT, and large-scale communication infrastructures, all closely tied to cloud-edge computing. He has led and participated in major Spanish and European R&amp;amp;D projects involving advanced networking, 5G, industrial AI, and smart data platforms. With more than 60 journal articles, 200 conference papers, and a senior membership at IEEE, he is a recognized contributor in the field. &lt;br /&gt;
&lt;br /&gt;
==== Involvement in Working Groups ====&lt;br /&gt;
Cloud-Edge Use Cases&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Cloud-Edge_Use_Cases&amp;diff=253</id>
		<title>Cloud-Edge Use Cases</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Cloud-Edge_Use_Cases&amp;diff=253"/>
		<updated>2025-11-18T13:54:40Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[Working group name::Cloud-Edge Use Cases]] [[ItemType::Working group]]==&lt;br /&gt;
[[Description::The Cloud-Edge Use Cases Working Group deals with use cases and applications that could take advantage of the continuum, such as Mobility, Transport and Travel, Energy, Manufacturing and Industry 4.0/5.0, Health, Infrastructures, Smart Buildings &amp;amp; Cities, Tourism &amp;amp; Cultural Heritage, Agriculture &amp;amp; Environment, and Media. This Working Group will contribute to the initial roll-out of next-generation use cases as part of a first industrial deployment at Europe-wide scale, showcasing data processing in different sectors to verify functionality, high scalability, interoperability, portability, interconnectivity and compatibility.]]&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Working group co-leader::Carlos Enrique Palau Salvador]] and [[Working group co-leader::Dimosthenis Kyriazis]].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Cloud-Edge_Use_Cases&amp;diff=252</id>
		<title>Cloud-Edge Use Cases</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Cloud-Edge_Use_Cases&amp;diff=252"/>
		<updated>2025-11-18T13:54:15Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[Working group name::Cloud-Edge Use Cases]] [[ItemType::Working group]]==&lt;br /&gt;
[[Description::The Cloud-Edge Use Cases Working Group deals with use cases and applications that could take advantage of the continuum, such as Mobility, Transport and Travel, Energy, Manufacturing and Industry 4.0/5.0, Health, Infrastructures, Smart Buildings &amp;amp; Cities, Tourism &amp;amp; Cultural Heritage, Agriculture &amp;amp; Environment, and Media. This Working Group will contribute to the initial roll-out of next-generation use cases as part of a first industrial deployment at Europe-wide scale, showcasing data processing in different sectors to verify functionality, high scalability, interoperability, portability, interconnectivity and compatibility.]]&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Working group co-leader::Giovanni Frattini|Working group co-leader::Carlos Enrique Palau Salvador]] and [[Working group co-leader::Dimosthenis Kyriazis]].&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Main_Page&amp;diff=251</id>
		<title>Main Page</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Main_Page&amp;diff=251"/>
		<updated>2025-10-29T13:37:20Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Contributing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[File:EUCEIlogo.png|thumb]]&lt;br /&gt;
&lt;br /&gt;
== &amp;lt;strong&amp;gt;Welcome to the EUCloudEdgeIoT Wiki&amp;lt;/strong&amp;gt; ==&lt;br /&gt;
This Wiki collects detailed information on the NexusForum.EU Working Groups, including the members taking part, research papers and references to delve deeper.&lt;br /&gt;
&lt;br /&gt;
=== What can I do on the EUCloudEdgeIoT Wiki? ===&lt;br /&gt;
The Wiki is a tool to explore the activities and outputs of the [https://eucloudedgeiot.eu/ EUCloudEdgeIoT] initiative. &lt;br /&gt;
&lt;br /&gt;
You can explore:&lt;br /&gt;
&lt;br /&gt;
* the [[NexusForum.EU Working Groups]]&lt;br /&gt;
* the [[Research and Innovation Actions]] funded by the European Commission&#039;s [https://research-and-innovation.ec.europa.eu/funding/funding-opportunities/funding-programmes-and-open-calls/horizon-2020_en Horizon 2020] and [https://research-and-innovation.ec.europa.eu/funding/funding-opportunities/funding-programmes-and-open-calls/horizon-europe_en Horizon Europe] programmes under the [https://eucloudedgeiot.eu/ EUCloudEdgeIoT] umbrella&lt;br /&gt;
* the [[IPCEI-CIS Reference Architecture]]&lt;br /&gt;
* a repository of [[Researchers]] active in the Cloud, Edge and IoT domains in Europe&lt;br /&gt;
&lt;br /&gt;
=== Contributing ===&lt;br /&gt;
If you are a member of the wider [https://eucloudedgeiot.eu/ EUCloudEdgeIoT] community, you will soon be able to provide details on the Research and Innovation Action you are in, share research on Edge, Cloud and IoT, comment and update the content available in the Wiki.&lt;br /&gt;
&lt;br /&gt;
We will enable registrations soon.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Researchers&amp;diff=250</id>
		<title>Researchers</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Researchers&amp;diff=250"/>
		<updated>2025-10-29T13:36:38Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page is under construction.&lt;br /&gt;
&lt;br /&gt;
You will soon find information on researchers active in the Cloud, Edge and IoT domains.&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=248</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=248"/>
		<updated>2025-10-22T13:16:18Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;8&amp;quot;|&#039;&#039;&#039;Management&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Logging&lt;br /&gt;
|Logging in cloud infrastructures refers to the systematic recording of events, transactions, and activities within the cloud environment. This process captures detailed logs of user actions, system operations, and data interactions, creating a repository of information that supports monitoring, auditing, troubleshooting, and security analysis. By maintaining comprehensive and accurate logs, cloud providers and users can trace system behaviors, detect anomalies, and swiftly respond to incidents. Effective logging not only aids in compliance with regulatory requirements but also enhances the overall transparency, reliability, and resilience of the cloud infrastructure.&lt;br /&gt;
|-&lt;br /&gt;
|Monitoring and Alerting&lt;br /&gt;
|Monitoring and alerting in cloud infrastructures involves continuous observation and real-time analysis of system performance, application behavior, and resource utilization. This process employs various tools and techniques to collect and analyze data from different components of the cloud environment, such as servers, networks, and applications. By setting static or AI-supported thresholds and rules, monitoring systems can detect anomalies, performance bottlenecks, and potential failures. When these conditions are met, alerting mechanisms are triggered to notify administrators and stakeholders promptly, enabling swift resolution and minimizing downtime. Effective monitoring and alerting are essential for maintaining the reliability, availability, and overall health of cloud services, ensuring optimal user experiences and adherence to service level agreements (SLAs). Logging, monitoring, and alerting are foundational, providing detailed records and continuous observation of system activities, which support troubleshooting, security analysis, performance optimization, and proactive intervention. Coupled with alerting mechanisms, they enable real-time detection and swift resolution of anomalies and potential failures, ensuring high availability and reliability. Logging, monitoring, and alerting can use artifacts from the Data layer to implement their functionality.&lt;br /&gt;
|-&lt;br /&gt;
|Accounting and Charging&lt;br /&gt;
|Accounting and charging in cloud infrastructure and services refers to the systematic process of tracking and invoicing the usage of cloud services by users and applications. Accounting involves the continuous collection of data on resource consumption, such as CPU usage, memory allocation, storage, and network bandwidth. This data is then analyzed to generate detailed usage reports, which serve as the basis for customer billing. Charging systems apply predefined pricing models and rates to the metered data, ensuring accurate and transparent charges based on actual usage. This process not only provides customers with clear insights into their cloud expenditures but also enables providers to manage resources efficiently and optimize their service offerings. Effective accounting and charging are essential for tracking resource consumption and generating accurate invoices, ensuring transparency. This is critical for maintaining financial accountability, fostering trust, and supporting the scalable and on-demand nature of cloud services. &lt;br /&gt;
|-&lt;br /&gt;
|Performance Management&lt;br /&gt;
|Performance Management in cloud infrastructures refers to the comprehensive process of defining, monitoring, and enforcing Service Level Agreements (SLAs) between cloud service providers and their customers. It also covers performance goals the provider may set internally for its services. This involves setting clear expectations for service performance, availability, and support, and ensuring that these commitments are met consistently. SLA management includes tracking key performance indicators (KPIs), generating compliance reports, and addressing any deviations through corrective actions. Effective performance management not only enhances customer satisfaction and trust but also enables providers to maintain high standards of service quality and reliability, thereby fostering long-term business relationships and competitive advantage.&lt;br /&gt;
|-&lt;br /&gt;
|Fault Management&lt;br /&gt;
|Fault Management in cloud infrastructures involves the systematic detection, isolation, and resolution of faults or issues within the cloud environment. This process includes identifying potential failures, diagnosing their root causes, and implementing corrective actions to restore normal operations. Fault management leverages workflow and ticketing systems combined with automated tools and techniques to monitor system components, analyze error logs, and trigger alerts for anomalies. By proactively managing faults, cloud providers can minimize downtime, enhance system reliability, and ensure continuous service delivery, ultimately contributing to the overall robustness and resilience of the cloud infrastructure.&lt;br /&gt;
|-&lt;br /&gt;
|Catalog/Repository Management&lt;br /&gt;
|It provides inventory information, at different layers, about the resources used and available and the services available and deployed, to support decision-making: for instance, the list of k8s clusters available (with their characteristics) or the list of applications deployed (and in which cluster). In addition, catalogs allow implementing a model-driven approach for services and resources.&lt;br /&gt;
|-&lt;br /&gt;
|Operation Automation&lt;br /&gt;
|This component provides automation for integration, delivery, verification, testing, optimization and other processes in cloud edge environments. It allows managing distributed software and infrastructure in an automated way (by code, by software, scripting, declaratively), reducing human errors, making implementations more uniform and predictable, and facilitating reconciliation and recovery to a working configuration after a misconfiguration, disaster or failure. This practice is usually referred to as XOps (DevOps, MLOps, AIOps, FinOps...). &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Security and Compliance&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Identity and Access Management&lt;br /&gt;
|In cloud infrastructures, Identity and Access Management (IAM) is a critical framework of policies and technologies that ensure the correct permissions and access rights are assigned to users and devices. This system provides operators with the capability to enforce precise control over resource access, defining who can access specific platform resources and under what conditions. IAM facilitates the automation of user provisioning, the enforcement of robust authentication mechanisms, and the adherence to the principle of least privilege. Coupled with other security measures such as encryption and audit logging, IAM significantly enhances the overall security posture, safeguarding against unauthorized access and potential breaches within the cloud environment. &lt;br /&gt;
|-&lt;br /&gt;
|Identity Management&lt;br /&gt;
|Identity Management (IdM) in cloud infrastructures focuses on the administration and management of user identities and their associated access rights. It involves the creation, maintenance, and deletion of user accounts, as well as the authentication of user identities and the authorization of their actions within the cloud environment. IdM ensures that only authorized users can access specific resources, while maintaining detailed records of user activities and access patterns. By implementing robust IdM solutions, organizations can enhance security, streamline access management, and achieve compliance with regulatory requirements, thereby protecting sensitive data and preventing unauthorized access. &lt;br /&gt;
|-&lt;br /&gt;
|Key Management Service&lt;br /&gt;
|Key Management Service (KMS) is an essential component within cloud infrastructures that provides centralized control over the cryptographic keys used to secure data. KMS facilitates the creation, management, and deletion of encryption keys, ensuring that data remains protected both at rest and in transit. By automating key rotation, enforcing access controls, and integrating seamlessly with other cloud services, KMS enhances the security framework by preventing unauthorized access to sensitive information and simplifying compliance with industry standards. This service is vital for maintaining data confidentiality, integrity, and availability within a cloud environment, thereby reinforcing the overall security posture of the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Audit Log&lt;br /&gt;
|Audit logs are an indispensable component of cloud infrastructures, serving as detailed records of all activities and access events within the environment. These logs capture a comprehensive trail of user actions, system events, and data access, providing essential visibility into the operation and security of the infrastructure. By systematically documenting every interaction, audit logs enable security professionals to perform thorough investigations, identify potential security incidents, and ensure compliance with regulatory standards. The granular information contained in audit logs supports proactive threat detection, forensic analysis, and the continuous improvement of security measures, thereby bolstering the overall integrity and accountability of the cloud environment. &lt;br /&gt;
|-&lt;br /&gt;
|Account Management&lt;br /&gt;
|In cloud infrastructures, Account Management is the systematic process of overseeing and controlling tenant accounts and their associated privileges. This involves the creation, modification, and deletion of accounts, as well as the assignment and revocation of access rights based on tenant entitlements. By maintaining detailed records of account activities and regularly auditing account permissions, organizations can enhance security, prevent unauthorized access, and ensure compliance with regulatory standards. This proactive management of accounts is crucial for maintaining the integrity and security of the cloud environment. &lt;br /&gt;
|-&lt;br /&gt;
|Service Compliance Verification&lt;br /&gt;
|Service Compliance Verification in cloud infrastructures is the process of ensuring that all cloud services and their operations adhere to predefined regulatory, security, and organizational standards. This involves conducting regular assessments and audits to verify that the services comply with laws such as GDPR, HIPAA, and industry-specific regulations. Compliance verification includes monitoring service configurations, access controls, and data handling practices to identify and rectify any deviations from the required standards. By implementing comprehensive compliance verification measures, organizations can mitigate legal and security risks, demonstrate due diligence, and maintain trust with stakeholders and customers. This systematic approach is essential for maintaining the integrity, security, and legality of cloud operations. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|&#039;&#039;&#039;Sustainability&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Benchmark, Metrics and Monitoring &lt;br /&gt;
|It provides comparisons of energy consumption among application instances, services, sites or hardware elements to identify good or bad performers. It also compares the energy consumption needs of applications/workloads with low-cost/renewable energy availability, helping to identify potential energy optimizations based on moving workloads to places where energy is free or low-cost. It tracks in real time the energy consumption of workloads, hardware elements (servers, storage, networking), sites and services, and the energy generation of associated renewable energy sources. It also monitors the resource consumption per service or application, waste and heat generation, and other potential environmental protection measures. This component sets the standards for the monitoring services required to support environmental targets and energy cost reduction, as well as the financial optimization of 8ra services and infrastructure. These data may be used by the Optimization component for forecasting, planning and decision making around energy and resource consumption. It identifies when applications, hardware elements, nodes or facilities exceed certain thresholds in terms of energy or resource consumption, triggering alerts. The Monitoring component also tracks the availability and cost of energy in energy generation locations, identifying those where free or low-cost energy is available. These are energy producers or consumers with excess energy capacity that would otherwise be wasted if not consumed, such as wind turbines, solar power plants and data centers. The component also gives recommendations for data collection frameworks and tools.&lt;br /&gt;
|-&lt;br /&gt;
|Energy Consumption and Carbon Emission Optimization &lt;br /&gt;
|This component implements mechanisms to optimize resource and energy consumption and carbon emission, for instance, scaling down the resources allocated to a certain application when they are not required and are not expected to be in the short term. It is in charge of implementing a “green” operational mode for cloud-edge nodes and platforms. It provides recommendations for optimizing application and infrastructure setup based on data from benchmarking and monitoring. This component may also provide recommendations with respect to application placement to reduce data transfer, optimize energy consumption or balance occupation among sites (reducing the risk of congestion). At the Data layer, it advises on data fusion and data reuse options to minimize redundant data exchange.&lt;br /&gt;
|-&lt;br /&gt;
|Sustainable Workload placement and scheduling&lt;br /&gt;
|It generates recommendations (conventional or AI-based) for temporal and spatial workload movement to reduce energy costs and/or carbon emissions. Recommendations are usually oriented towards moving workloads temporarily to energy generation sites where energy would otherwise be wasted (free or low-cost). For this, it forecasts and matches workload requirements with server capacity at these sites with excess energy. It leverages algorithms to maximize energy efficiency and resource optimization and minimize environmental impact. Workload placement also considers the amount of data communication, latency, and the cost of migration in the selection of the optimum location. The Scheduler plans and implements the recommendations of the Resource Optimization component, moving workloads temporarily to low-cost energy generation locations and providing workload execution assurance. The implementation is integrated into or used by a (de-)central Multi-Cloud Orchestrator, the component able to move workloads between sites.&lt;br /&gt;
|-&lt;br /&gt;
|Renewable Energy Management &lt;br /&gt;
|It gives recommendations for placing datacenters near energy production (wind, sun) and for integrating battery storage systems to balance grid capacity. &lt;br /&gt;
|-&lt;br /&gt;
|Cooling and Heat Management &lt;br /&gt;
|It enhances energy and operational efficiency by integrating cooling technology and real-time optimization of cooling and workload.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Application Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer supports the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services to support its execution; and the attributes that may guide the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and updates (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It checks the identity of the user, authenticates them, and checks their authorization to use the application before providing access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors performance, and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of the applications and functions that providers have made available. Entries contain the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Data Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component enables data cataloguing, supporting exposure and discovery at scale so that data can easily be searched, found and browsed over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers and contracting data acquisition, for identity checking, and for data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;AI Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundation models: LLMs, SLMs and multimodal LLMs, in multiple languages and supporting multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization, and text and speech generation. They can be fine-tuned and adapted to specific use cases using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point based on the combination of models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work on local datasets, reducing the need to transfer data to a central location for training. This component allows using and orchestrating AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|&#039;&#039;&#039;Service Orchestration&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration assures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to restore a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, while interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|&#039;&#039;&#039;Cloud Edge Platform&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|Multi-Cloud Orchestrator delivers a Platform as a Service (PaaS) offering. A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. It receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes the state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or by rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machines, containers and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS, managed by the MCO, offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO handles infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) where to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description shows a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that enables service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example, adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and k8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on virtualization stack (cluster nodes are VMs), interacting with PIM or VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application, which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of different virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software package(s) on top of existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (e.g. via a Helm chart or a resource model declaration) on any K8s cluster (or alike) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters in different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling customers to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers seamlessly while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist at all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and their configuration. The information in this repository helps the Multi-Cloud Orchestrator select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS): a cloud computing model that allows developers to build and deploy applications as individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|&#039;&#039;&#039;Virtualization&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that delivers virtualized computing resources, such as virtual machines, storage, and networks. Users can provision, scale, and manage these resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that provides a platform for managing and deploying containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS automates container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements security for virtual infrastructure management: role-based access control that ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and of the configuration and availability of virtual resources in each of them (for instance, the number of K8s clusters available per site, the CPU/memory available per K8s cluster, the number of virtual CPUs available to set up new K8s clusters, ...) in order to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|&#039;&#039;&#039;Network Systems, SDN controllers&#039;&#039;&#039;&lt;br /&gt;
|It provides the capabilities to manage physical and virtualized/cloudified networking elements to build network services in the geographically distributed Cloud Edge Continuum.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|&#039;&#039;&#039;Virtualization&#039;&#039;&#039;&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:&lt;br /&gt; Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center.&lt;br /&gt; Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans.&lt;br /&gt; Risk Management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures.&lt;br /&gt; Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation.&lt;br /&gt; Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and of the configuration and availability of physical hardware resources in each of them (for instance, the number of servers per location, the type of servers, the type of NIC cards available per location, the cost of resources, the energy consumption of resources, ...) in order to support decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|&#039;&#039;&#039;Physical Network Resources&#039;&#039;&#039;&lt;br /&gt;
|It includes all the physical hardware resources required to implement the Cloud Edge Continuum (compute, storage and networking). It is closely connected to the physical network infrastructure that supports communication among the computing nodes in the continuum and the connectivity of users to that continuum.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=247</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=247"/>
		<updated>2025-10-22T13:14:46Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;8&amp;quot;|&#039;&#039;&#039;Management&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Logging&lt;br /&gt;
|Logging in cloud infrastructures refers to the systematic recording of events, transactions, and activities within the cloud environment. This process captures detailed logs of user actions, system operations, and data interactions, creating a repository of information that supports monitoring, auditing, troubleshooting, and security analysis. By maintaining comprehensive and accurate logs, cloud providers and users can trace system behaviors, detect anomalies, and swiftly respond to incidents. Effective logging not only aids in compliance with regulatory requirements but also enhances the overall transparency, reliability, and resilience of the cloud infrastructure.&lt;br /&gt;
|-&lt;br /&gt;
|Monitoring and Alerting&lt;br /&gt;
|Monitoring and alerting in cloud infrastructures involves continuous observation and real-time analysis of system performance, application behavior, and resource utilization. This process employs various tools and techniques to collect and analyze data from different components of the cloud environment, such as servers, networks, and applications. By setting static or AI-supported thresholds and rules, monitoring systems can detect anomalies, performance bottlenecks, and potential failures. When these conditions are met, alerting mechanisms are triggered to notify administrators and stakeholders promptly, enabling swift resolution and minimizing downtime. Effective monitoring and alerting are essential for maintaining the reliability, availability, and overall health of cloud services, ensuring optimal user experiences and adherence to service level agreements (SLAs). Logging, monitoring, and alerting are foundational, providing detailed records and continuous observation of system activities, which support troubleshooting, security analysis, performance optimization, and proactive intervention. Coupled with alerting mechanisms, they enable real-time detection and swift resolution of anomalies and potential failures, ensuring high availability and reliability. Logging, monitoring, and alerting may use artifacts from the Data layer to implement their functionality.&lt;br /&gt;
|-&lt;br /&gt;
|Accounting and Charging&lt;br /&gt;
|Accounting and charging in cloud infrastructure and services refers to the systematic process of tracking and invoicing the usage of cloud services by users and applications. Accounting involves the continuous collection of data on resource consumption, such as CPU usage, memory allocation, storage, and network bandwidth. This data is then analyzed to generate detailed usage reports, which serve as the basis for customer billing. Charging systems apply predefined pricing models and rates to the metered data, ensuring accurate and transparent charges based on actual usage. This process not only provides customers with clear insights into their cloud expenditures but also enables providers to manage resources efficiently and optimize their service offerings. Effective accounting and charging are critical for maintaining financial accountability, fostering trust and transparency, and supporting the scalable and on-demand nature of cloud services. &lt;br /&gt;
|-&lt;br /&gt;
|Performance Management&lt;br /&gt;
|Performance Management in cloud infrastructures refers to the comprehensive process of defining, monitoring, and enforcing Service Level Agreements (SLAs) between cloud service providers and their customers. It also covers performance goals the provider may set internally for its services. This involves setting clear expectations for service performance, availability, and support, and ensuring that these commitments are met consistently. SLA management includes tracking key performance indicators (KPIs), generating compliance reports, and addressing any deviations through corrective actions. Effective performance management not only enhances customer satisfaction and trust but also enables providers to maintain high standards of service quality and reliability, thereby fostering long-term business relationships and competitive advantage.&lt;br /&gt;
|-&lt;br /&gt;
|Fault Management&lt;br /&gt;
|Fault Management in cloud infrastructures involves the systematic detection, isolation, and resolution of faults or issues within the cloud environment. This process includes identifying potential failures, diagnosing their root causes, and implementing corrective actions to restore normal operations. Fault management leverages workflow and ticketing systems combined with automated tools and techniques to monitor system components, analyze error logs, and trigger alerts for anomalies. By proactively managing faults, cloud providers can minimize downtime, enhance system reliability, and ensure continuous service delivery, ultimately contributing to the overall robustness and resilience of the cloud infrastructure.&lt;br /&gt;
|-&lt;br /&gt;
|Catalog/Repository Management&lt;br /&gt;
|It provides inventory information, at different layers, about the resources used and available and the services available and deployed, to support decision-making. For instance, the list of K8s clusters available (with their characteristics), or the list of applications deployed (and in which cluster). In addition, catalogs allow the implementation of a model-driven approach for services and resources.&lt;br /&gt;
|-&lt;br /&gt;
|Operation Automation&lt;br /&gt;
|This component provides automation for integration, delivery, verification, testing, optimization and other processes in cloud edge environments. It allows distributed software and infrastructure to be managed in an automated way (by code, by software, by scripting, declaratively), reducing human errors, making implementations more uniform and predictable, and facilitating reconciliation and recovery to a working configuration after a misconfiguration, disaster or failure. This practice is usually referred to as XOps (DevOps, MLOps, AIOps, FinOps...). &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Security and Compliance&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Identity and Access Management&lt;br /&gt;
|In cloud infrastructures, Identity and Access Management (IAM) is a critical framework of policies and technologies that ensure the correct permissions and access rights are assigned to users and devices. This system provides operators with the capability to enforce precise control over resource access, defining who can access specific platform resources and under what conditions. IAM facilitates the automation of user provisioning, the enforcement of robust authentication mechanisms, and the adherence to the principle of least privilege. Coupled with other security measures such as encryption and audit logging, IAM significantly enhances the overall security posture, safeguarding against unauthorized access and potential breaches within the cloud environment. &lt;br /&gt;
|-&lt;br /&gt;
|Identity Management&lt;br /&gt;
|Identity Management (IdM) in cloud infrastructures focuses on the administration and management of user identities and their associated access rights. It involves the creation, maintenance, and deletion of user accounts, as well as the authentication of user identities and the authorization of their actions within the cloud environment. IdM ensures that only authorized users can access specific resources, while maintaining detailed records of user activities and access patterns. By implementing robust IdM solutions, organizations can enhance security, streamline access management, and achieve compliance with regulatory requirements, thereby protecting sensitive data and preventing unauthorized access. &lt;br /&gt;
|-&lt;br /&gt;
|Key Management Service&lt;br /&gt;
|Key Management Service (KMS) is an essential component within cloud infrastructures that provides centralized control over the cryptographic keys used to secure data. KMS facilitates the creation, management, and deletion of encryption keys, ensuring that data remains protected both at rest and in transit. By automating key rotation, enforcing access controls, and integrating seamlessly with other cloud services, KMS enhances the security framework by preventing unauthorized access to sensitive information and simplifying compliance with industry standards. This service is vital for maintaining data confidentiality, integrity, and availability within a cloud environment, thereby reinforcing the overall security posture of the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Audit Log&lt;br /&gt;
|Audit logs are an indispensable component of cloud infrastructures, serving as detailed records of all activities and access events within the environment. These logs capture a comprehensive trail of user actions, system events, and data access, providing essential visibility into the operation and security of the infrastructure. By systematically documenting every interaction, audit logs enable security professionals to perform thorough investigations, identify potential security incidents, and ensure compliance with regulatory standards. The granular information contained in audit logs supports proactive threat detection, forensic analysis, and the continuous improvement of security measures, thereby bolstering the overall integrity and accountability of the cloud environment. &lt;br /&gt;
|-&lt;br /&gt;
|Account Management&lt;br /&gt;
|In cloud infrastructures, Account Management is the systematic process of overseeing and controlling tenant accounts and their associated privileges. This involves the creation, modification, and deletion of accounts, as well as the assignment and revocation of access rights based on tenant entitlements. By maintaining detailed records of account activities and regularly auditing account permissions, organizations can enhance security, prevent unauthorized access, and ensure compliance with regulatory standards. This proactive management of accounts is crucial for maintaining the integrity and security of the cloud environment. &lt;br /&gt;
|-&lt;br /&gt;
|Service Compliance Verification&lt;br /&gt;
|Service Compliance Verification in cloud infrastructures is the process of ensuring that all cloud services and their operations adhere to predefined regulatory, security, and organizational standards. This involves conducting regular assessments and audits to verify that the services comply with laws such as GDPR, HIPAA, and industry-specific regulations. Compliance verification includes monitoring service configurations, access controls, and data handling practices to identify and rectify any deviations from the required standards. By implementing comprehensive compliance verification measures, organizations can mitigate legal and security risks, demonstrate due diligence, and maintain trust with stakeholders and customers. This systematic approach is essential for maintaining the integrity, security, and legality of cloud operations. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|&#039;&#039;&#039;Application Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Benchmark, Metrics and Monitoring &lt;br /&gt;
|It provides comparisons of energy consumption among application instances, services, sites or hardware elements to identify good and bad performers. It also compares the energy consumption needs of applications/workloads with low-cost/renewable energy availability, helping to identify potential energy optimizations based on moving workloads to places where energy is free or low-cost. It tracks in real time the energy consumption of workloads, hardware elements (servers, storage, networking), sites and services, and the energy generation of associated renewable energy sources. It also monitors the resource consumption per service or application, waste and heat generation, and other potential environmental protection measures. This component sets the standards for the monitoring services required to support environmental targets, energy cost reduction and the financial optimization of 8ra services and infrastructure. These data can be used by the Optimization component for forecasting, planning and decision making around energy and resource consumption. It identifies when applications, hardware elements, nodes or facilities exceed certain thresholds in terms of energy or resource consumption, triggering alerts. The Monitoring component also tracks the availability and cost of energy in energy generation locations, identifying those where free or low-cost energy is available. These are energy producers or consumers with excess energy capacity that would otherwise be wasted if not consumed, such as wind turbines, solar power plants and data centers. The component also gives recommendations for data collection frameworks and tools.&lt;br /&gt;
|-&lt;br /&gt;
|Energy Consumption and Carbon Emission Optimization&lt;br /&gt;
|This component implements mechanisms to optimize resource and energy consumption and carbon emissions, for instance, scaling down the resources allocated to a certain application when they are not required and are not foreseen to be required in the short term. It is in charge of implementing a “green” operational mode for cloud-edge nodes and platforms. It provides recommendations for the optimization of application and infrastructure setup based on data from benchmarking and monitoring. This component may also provide recommendations with respect to application placement to reduce data transfer, optimize energy consumption or balance occupation among sites (reducing the risk of congestion). At the Data layer, it advises on data fusion and data reuse options to minimize redundant data exchange.&lt;br /&gt;
|-&lt;br /&gt;
|Sustainable Workload placement and scheduling&lt;br /&gt;
|It generates recommendations (conventional or AI-based) for temporal and spatial workload movement to reduce energy costs and/or carbon emissions. Recommendations are usually oriented towards moving workloads temporarily to energy generation sites where energy would otherwise be wasted (free or low-cost). For this, it forecasts and matches workload requirements with server capacity at these sites with excess energy. It leverages algorithms to maximize energy efficiency and resource optimization and minimize environmental impact. Workload placement also considers the amount of data communication, the latency and the cost of migration in the selection of the optimum location. The Scheduler plans and implements the recommendations of the Resource Optimization component, moving workloads temporarily to low-cost energy generation locations and providing workload execution assurance. The implementation is integrated into or used by a (de-)central Multi-Cloud Orchestrator, the component able to move workloads between sites.&lt;br /&gt;
|-&lt;br /&gt;
|Renewable Energy Management &lt;br /&gt;
|It gives recommendations for the placement of data centers near energy production (wind, sun) and for the integration of battery storage systems to balance grid capacity. &lt;br /&gt;
|-&lt;br /&gt;
|Cooling and Heat Management &lt;br /&gt;
|It enhances energy and operational efficiency by integrating cooling technology and real-time optimization of cooling and workload.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Application Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services to support its execution; and the attributes that may allow the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity, authenticates the user and checks their authorization to use the application before providing access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of the applications and functions that providers have made available. Each entry contains the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Data Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component provides data cataloguing for exposure and discovery at scale, making it easy to search, find and browse data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers, contracting data acquisition, identity checking, and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;AI Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Cloud-Edge Inference component facilitates real-time deployment and execution of trained AI models on edge devices, with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundational models (LLMs, SLMs and multimodal LLMs) in multiple languages, covering multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization, and text and speech generation. They can be fine-tuned and adapted to specific use cases using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|&#039;&#039;&#039;Service Orchestration&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to recover a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|&#039;&#039;&#039;Cloud Edge Platform&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|The Multi-Cloud Orchestrator delivers Platform as a Service (PaaS). A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. It receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes the state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machine, container and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS, managed by the MCO, offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO takes care of infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) in which to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description shows a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that will enable the service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and k8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on virtualization stack (cluster nodes are VMs), interacting with PIM or VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application, which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of different virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software package(s) on top of existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (e.g. via a Helm chart or resource model declaration) on any K8s cluster (or alike) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters in different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with the ones of other federated providers, enabling the customer to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way, while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and the configuration. The information in this repository helps the multi-cloud orchestrator to select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS): a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|&#039;&#039;&#039;Virtualization&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that delivers virtualized computing resources, such as virtual machines, storage, and networks. Users can provision, scale, and manage the resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that provides a platform allowing users to manage and deploy containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements security for virtual infrastructure management: role-based access control that ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and the configuration and availability of virtual resources in each one of them (for instance, number of K8s clusters available per site, CPU/memory available per K8s cluster, number of virtual CPUs available to set up new K8s clusters, ...) in order to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|&#039;&#039;&#039;Network Systems, SDN controllers&#039;&#039;&#039;&lt;br /&gt;
|It provides the capabilities to manage physical and virtualized/cloudified networking elements to build network services in the geographically distributed Cloud Edge Continuum.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|&#039;&#039;&#039;Virtualization&#039;&#039;&#039;&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:&lt;br /&gt;
Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security. This helps in optimizing the performance and efficiency of the data center.&lt;br /&gt;
Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets. This includes layout planning, capacity management, and future expansion plans.&lt;br /&gt;
Risk Management: by continuously monitoring environmental conditions and equipment status, it helps in identifying potential risks and mitigating them before they lead to failures.&lt;br /&gt;
Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation.&lt;br /&gt;
Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and the configuration and availability of physical hardware resources in each one of them (for instance, number of servers per location, type of servers, type of NIC cards available per location, cost of resources, energy consumption of resources, ...) in order to support decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|&#039;&#039;&#039;Physical Network Resources&#039;&#039;&#039;&lt;br /&gt;
|It includes all the physical hardware resources required to implement the Cloud Edge Continuum (compute, storage and networking). It is closely connected to the physical network infrastructure that supports communication among the computing nodes in the continuum and the connectivity of users to that continuum.&lt;br /&gt;
|}&lt;br /&gt;
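The Federated Learning component above describes training agents that work on local datasets while a central point combines the resulting models. As an illustrative sketch only (plain NumPy federated averaging; not part of the ICRA specification, and all names are hypothetical), local model updates can be combined without the raw data ever leaving each provider:

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One training agent: gradient steps on its local data only (linear model)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # mean-squared-error gradient
        w -= lr * grad
    return w

def federated_round(global_w, shards):
    """Central point: average the locally trained models (FedAvg-style),
    weighted by each provider's sample count; raw data stays on the shard."""
    updates = [(len(y), local_update(global_w, X, y)) for X, y in shards]
    total = sum(n for n, _ in updates)
    return sum(n * w for n, w in updates) / total

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three hypothetical providers, each holding a private data shard
shards = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    shards.append((X, X @ true_w + rng.normal(scale=0.01, size=50)))

w = np.zeros(2)
for _ in range(30):          # repeated rounds of local training + averaging
    w = federated_round(w, shards)
```

Only model weights cross provider boundaries here, which is the property the Federated Learning row emphasizes: parallel training without centralized data aggregation.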
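The Multi-Cloud Orchestrator row above notes that workload placement may follow attributes such as area of service, performance, sustainability and privacy. A minimal filter-and-rank sketch of that placement decision (all location names and attribute fields are hypothetical, not taken from the ICRA):

```python
from dataclasses import dataclass

@dataclass
class Location:
    name: str
    region: str
    latency_ms: float       # measured latency to the workload's service area
    renewable_share: float  # fraction of energy from renewable sources
    free_cpus: int

def place(workload, locations):
    """Filter locations by hard constraints (sovereignty, performance,
    capacity), then rank the remainder by latency and sustainability."""
    candidates = [
        loc for loc in locations
        if loc.region in workload["allowed_regions"]      # privacy / sovereignty
        and loc.latency_ms <= workload["max_latency_ms"]  # performance
        and loc.free_cpus >= workload["cpus"]             # capacity
    ]
    # Lower latency first; break ties with a higher renewable share
    candidates.sort(key=lambda loc: (loc.latency_ms, -loc.renewable_share))
    return candidates[0].name if candidates else None

locations = [
    Location("edge-muc-1", "eu", 4.0, 0.6, 8),
    Location("edge-par-2", "eu", 9.0, 0.9, 16),
    Location("dc-us-east", "us", 80.0, 0.4, 64),
]
workload = {"allowed_regions": {"eu"}, "max_latency_ms": 20, "cpus": 4}
```

A real MCO would draw the candidate list from the Cloud Edge Resource Repository and hand the chosen location to the Workload Deployment Manager, but the filter-then-rank shape of the decision is the same.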
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=246</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=246"/>
		<updated>2025-10-22T13:05:05Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Application Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services to support its execution; and the attributes that may allow the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity and checks their authorization to use the application before providing access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of applications and functions that the providers have made available. Each entry contains the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Data Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component enables data cataloguing, supporting exposure and discovery at scale so that data can be easily searched, found and browsed over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers and contracting data acquisition, as well as identity checking and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;AI Layer&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Cloud-Edge Inference component facilitates real-time deployment and execution of trained AI models on edge devices, with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundational models (LLMs, SLMs and multimodal LLMs) in multiple languages, covering multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization, and text and speech generation. They can be fine-tuned and adapted to specific use cases using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|&#039;&#039;&#039;Service Orchestration&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to restore a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|&#039;&#039;&#039;Cloud Edge Platform&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|The Multi-Cloud Orchestrator delivers a Platform as a Service (PaaS) offering. A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. It receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes the state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machines, containers and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS, managed by the MCO, offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO handles infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) in which to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description is a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that will enable the service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and k8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on virtualization stack (cluster nodes are VMs), interacting with PIM or VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application, which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of different virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software package(s) on top of existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (e.g. via a Helm chart or resource model declaration) on any K8s cluster (or similar) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters across different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling customers to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and their configuration. The information in this repository helps the Multi-Cloud Orchestrator select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|&#039;&#039;&#039;Virtualization&#039;&#039;&#039;&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers a Bare Metal as a Service (BMaaS) offering. BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows applications to harness the full power of the hardware, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides an Infrastructure as a Service (IaaS) offering. IaaS is a cloud computing model that delivers virtualized computing resources, including essential services such as virtual machines, storage, and networks. Users can provision, scale, and manage the resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides a Container as a Service (CaaS) offering. CaaS is a cloud service model that provides a platform allowing users to manage and deploy containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements security for virtual infrastructure management: role-based access control that ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and the configuration and availability of virtual resources in each one of them (for instance, number of k8s clusters available per site, CPU/memory available per k8s cluster, number of virtual CPUs available to set up new k8s clusters, ...) in order to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|&#039;&#039;&#039;Network Systems, SDN controllers&#039;&#039;&#039;&lt;br /&gt;
|It provides the capabilities to manage physical and virtualized/cloudified networking elements to build network services in the geographically distributed Cloud Edge Continuum.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|&#039;&#039;&#039;Virtualization&#039;&#039;&#039;&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:&lt;br /&gt;
* Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security. This helps in optimizing the performance and efficiency of the data center.&lt;br /&gt;
* Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets. This includes layout planning, capacity management, and future expansion plans.&lt;br /&gt;
* Risk Management: by continuously monitoring environmental conditions and equipment status, it helps in identifying potential risks and mitigating them before they lead to failures.&lt;br /&gt;
* Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation.&lt;br /&gt;
* Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and the configuration and availability of physical hardware resources in each one of them (for instance, number of servers per location, type of servers, type of NIC cards available per location, cost of resources, energy consumption of resources, ...) in order to support decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|&#039;&#039;&#039;Physical Network Resources&#039;&#039;&#039;&lt;br /&gt;
|It includes all the physical hardware resources required to implement the Cloud Edge Continuum (compute, storage and networking). It is closely connected to the physical network infrastructure that supports communication among the computing nodes in the continuum and the connectivity of users to that continuum.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=245</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=245"/>
		<updated>2025-10-22T13:03:28Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each of the application components will require, including the set of functions/services to support its execution; and the attributes that may allow the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity, authenticates them, and checks their authorization to use the application before providing access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of the applications and functions that providers have made available. Each entry contains the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Data Layer&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component provides data cataloguing for exposure and discovery at scale, making it easy to search, find and browse data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers and contracting data acquisition, as well as identity checking and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|AI Layer&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments, creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundational models (LLMs, SLMs, multimodal LLMs) in multiple languages, covering multiple data types (text, images, video, code, etc.). These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization, and text and speech generation. They can be fine-tuned and adapted to specific use cases using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work on local datasets, reducing the need to transfer data to a central location for training. This component allows AI resources from multiple providers to be used and orchestrated to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|Service Orchestration&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to restore a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Cloud Edge Platform&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|The Multi-Cloud Orchestrator delivers a Platform as a Service (PaaS) offering. A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. It receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes the state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machines, containers and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS, managed by the MCO, offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO handles infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) in which to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description is a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that will enable the service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters, both over bare metal and over virtual machines, on request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and K8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on a virtualization stack (cluster nodes are VMs), interacting with the PIM or the VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application that runs over container clusters making use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge Continuum to run on a diverse set of virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software packages on top of existing clusters following MCO requests. It exposes a single interface to deploy software packages (e.g. via a Helm chart or resource model declaration) on any K8s cluster (or similar) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters across different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling the customer to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way, while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and their configuration. The information in this repository helps the Multi-Cloud Orchestrator select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Virtualization&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that delivers virtualized computing resources, such as virtual machines, storage, and networks. Users can provision, scale, and manage these resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that provides a platform allowing users to manage and deploy containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements role-based access control for virtual infrastructure management, ensuring proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and the configuration and availability of virtual resources in each of them (for instance, the number of K8s clusters available per site, the CPU/memory available per K8s cluster, the number of virtual CPUs available to set up new K8s clusters, ...) in order to help take decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Network Systems, SDN controllers&lt;br /&gt;
|It provides the capabilities to manage physical and virtualized/cloudified networking elements to build network services in the geographically distributed Cloud Edge Continuum.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|Virtualization&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:&lt;br /&gt; Monitoring and management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center.&lt;br /&gt; Documentation and planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans.&lt;br /&gt; Risk management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures.&lt;br /&gt; Integration with IT systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation.&lt;br /&gt; Sustainability and compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and the configuration and availability of physical hardware resources in each of them (for instance, the number and type of servers per location, the types of NIC cards available, the cost of resources, the energy consumption of resources, ...) in order to help take decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Physical Network Resources&lt;br /&gt;
|It includes all the physical hardware resources required to implement the Cloud Edge Continuum (compute, storage and networking). It is closely connected to the physical network infrastructure that supports communication among the computing nodes in the continuum and the connectivity of users to that continuum.&lt;br /&gt;
|}&lt;br /&gt;
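As an illustration of the placement flow described for the Multi-Cloud Orchestrator above (matching workload attributes such as area of service, performance and sustainability against the Cloud Edge Resource Repository), the selection step could be sketched as follows. This is a minimal sketch under stated assumptions: all class names, fields and the "prefer the greenest candidate" scoring rule are hypothetical, not part of the IPCEI-CIS specification.

```python
# Hypothetical sketch of the MCO location-selection step.
# Names, fields and the scoring rule are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class EdgeLocation:
    """One entry of a (hypothetical) Cloud Edge Resource Repository."""
    name: str
    region: str             # area of service
    free_vcpus: int         # available capacity
    latency_ms: float       # measured latency to the service area
    renewable_share: float  # sustainability attribute, 0..1

@dataclass
class WorkloadRequest:
    """Desired state extracted from the application descriptor."""
    required_vcpus: int
    region: str
    max_latency_ms: float

def select_location(req: WorkloadRequest,
                    repo: List[EdgeLocation]) -> Optional[EdgeLocation]:
    """Filter locations that satisfy the desired state, then prefer the
    candidate with the highest renewable-energy share."""
    candidates = [loc for loc in repo
                  if loc.region == req.region
                  and loc.free_vcpus >= req.required_vcpus
                  and loc.latency_ms <= req.max_latency_ms]
    if not candidates:
        # In the architecture, the MCO would instead request new
        # resources via the PIM/VIP before retrying the deployment.
        return None
    return max(candidates, key=lambda loc: loc.renewable_share)
```

Once a location is selected, the MCO would hand the package to the Workload Deployment Manager; the sketch stops at the selection decision.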
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=244</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=244"/>
		<updated>2025-10-22T13:02:02Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services supporting its execution; and the attributes that may allow the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and updates (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity, authenticates them, and checks their authorization to use the application before granting access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of applications and functions that the providers have made available. They contain the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Data Layer&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component enables data cataloguing, supporting exposure and discovery at scale so that data can be easily searched, found and browsed over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers and contracting data acquisition, as well as identity checking and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|AI Layer&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments, creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundational models: LLMs, SLMs, multimodal LLMs, in multiple languages and managing multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization or text and speech generation. They can be fine-tuned and adapted to specific use cases, using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|Service Orchestration&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to restore a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Cloud Edge Platform&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|The Multi-Cloud Orchestrator delivers Platform as a Service (PaaS). A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. The MCO receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) an application, together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes this desired state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or by rescaling or releasing resources. The MCO works closely with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application, i.e. the specific combination of bare metal, virtual machines, containers and serverless mechanisms the application has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS managed by the MCO offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO handles infrastructure, security, and operational aspects. Based on attributes like area of service and performance, the MCO may select the location(s) in which to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description is a decomposition of the functionality of a cloud-edge continuum workload management solution, which may be implemented in many ways, combining or excluding some of its components to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that will enable the service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters, both over bare metal and over virtual machines, on request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and K8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on a virtualization stack (cluster nodes are VMs), interacting with the PIM or the VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application that runs over container clusters making use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge Continuum to run on a diverse set of virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software packages on top of existing clusters following MCO requests. It exposes a single interface to deploy software packages (e.g. via a Helm chart or a resource model declaration) on any K8s cluster (or similar) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters across different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling the customer to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way, while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control, which ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available, and their configuration. The information in this repository helps the Multi-Cloud Orchestrator select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Virtualization&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service lets applications harness the full power of the hardware, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that delivers virtualized computing resources, such as virtual machines, storage, and networks. Users can provision, scale, and manage these resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that offers a platform allowing users to deploy and manage containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements a key security aspect of virtual infrastructure management: role-based access control, which ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and of the configuration and availability of virtual resources in each of them (for instance, the number of K8s clusters available per site, the CPU/memory available per K8s cluster, the number of virtual CPUs available to set up new K8s clusters, ...) in order to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Network Systems, SDN controllers&lt;br /&gt;
|It provides the capabilities to manage physical and virtualized/cloudified networking elements to build network services in the geographically distributed Cloud Edge Continuum.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Hardware&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:&lt;br /&gt;
* Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center.&lt;br /&gt;
* Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans.&lt;br /&gt;
* Risk Management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures.&lt;br /&gt;
* Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation.&lt;br /&gt;
* Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and of the configuration and availability of physical hardware resources in each of them (for instance, the number and type of servers per location, the type of NIC cards available per location, the cost of resources, the energy consumption of resources, ...) in order to support decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Physical Network Resources&lt;br /&gt;
|It includes all the physical hardware resources required to implement the Cloud Edge Continuum (compute, storage and networking). It is closely connected to the physical network infrastructure that supports communication among the computing nodes in the continuum and the connectivity of users to that continuum.&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=243</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=243"/>
		<updated>2025-10-22T13:01:21Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each component will require, including the set of functions/services needed to support its execution; and the attributes that may drive the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
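Purely as an illustration, such an application description could be sketched as a simple data structure; the component names, the attribute keys and the helper below are invented for this example and are not part of the ICRA specification. &lt;br /&gt;

```python
# Hypothetical application descriptor: components, their service
# function chain, and per-component placement attributes. All names
# and fields are illustrative, not taken from the ICRA.
app = {
    "name": "smart-parking",
    "components": {
        "camera-feed": {"runtime": "container", "latency_ms": 10},
        "detector": {"runtime": "container", "gpu": True},
        "dashboard": {"runtime": "vm", "region": "eu"},
    },
    "chain": [("camera-feed", "detector"), ("detector", "dashboard")],
}

def chain_order(app):
    # Return components in service-function-chain order; a simple
    # linear chain is assumed for this sketch.
    order = [app["chain"][0][0]]
    for _src, dst in app["chain"]:
        order.append(dst)
    return order
```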
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity, authenticates the user, and checks their authorization to use the application before granting access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors performance, and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of applications and functions that providers have made available. Each entry describes the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Data Layer&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
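A minimal sketch of such a pipeline, assuming invented record fields and a toy normalisation step (none of these names come from the ICRA specification), could look as follows: &lt;br /&gt;

```python
# Illustrative data pipeline: a connector collects raw records,
# curation drops records that fail quality checks, and
# pre-processing normalises the remaining values for analytics
# or training. All field names are hypothetical.
def collect():
    # Stand-in for a connector to an external data source.
    return [{"temp": "21.5"}, {"temp": "bad"}, {"temp": "19.0"}]

def curate(records):
    out = []
    for r in records:
        try:
            out.append({"temp": float(r["temp"])})
        except ValueError:
            pass  # drop records that fail the quality check
    return out

def preprocess(records):
    # Toy normalisation: scale temperatures into a 0..1 range.
    return [round(r["temp"] / 30.0, 3) for r in records]

ready = preprocess(curate(collect()))
```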
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component provides data cataloguing for exposure and discovery at scale, making it easy to search, find and browse data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers and contracting data acquisition, as well as identity checking and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|AI Layer&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments, creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundational models: LLMs, SLMs, multimodal LLMs, in multiple languages and managing multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization or text and speech generation. They can be fine-tuned and adapted to specific use cases, using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work locally on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
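The central combination step described above can be illustrated with the classic federated averaging (FedAvg) scheme, sketched here in plain Python; the function name and data layout are illustrative, not part of any ICRA interface. &lt;br /&gt;

```python
# Sketch of federated averaging: each training agent contributes its
# locally trained weight vector and the number of local samples it
# trained on; the central point combines them into one global model
# without ever seeing the raw data.
def federated_average(client_updates):
    # client_updates: list of (weights, n_samples) pairs.
    total = sum(n for _, n in client_updates)
    dim = len(client_updates[0][0])
    # Weight each client by its sample count, as in classic FedAvg.
    return [
        sum(w[i] * n for w, n in client_updates) / total
        for i in range(dim)
    ]
```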
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|Service Orchestration&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and reports deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take actions to restore a state that meets the application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Cloud Edge Platform&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|The Multi-Cloud Orchestrator delivers Platform as a Service (PaaS). A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. It receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes the state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machine, container and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS, managed by the MCO, offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO takes care of infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) where to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description shows a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs. &lt;br /&gt;
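As a rough illustration of attribute-driven placement, the descriptor fields and site attributes below are invented for the example and are not defined by the ICRA; the sketch only shows how requirements could filter candidate locations. &lt;br /&gt;

```python
# Hypothetical resource-model descriptor and a toy placement filter.
descriptor = {
    "app": "video-analytics",
    "runtime": "containers",
    "requirements": {"max_latency_ms": 20, "region": "eu", "gpu": True},
}

# Invented site inventory, standing in for the resource repositories.
sites = [
    {"name": "edge-a", "latency_ms": 12, "region": "eu", "gpu": True},
    {"name": "edge-b", "latency_ms": 45, "region": "eu", "gpu": True},
    {"name": "cloud-1", "latency_ms": 80, "region": "us", "gpu": False},
]

def candidate_sites(descriptor, sites):
    # Keep only sites meeting every requirement in the descriptor.
    req = descriptor["requirements"]
    out = []
    for s in sites:
        if s["latency_ms"] > req["max_latency_ms"]:
            continue
        if s["region"] != req["region"]:
            continue
        if req["gpu"] and not s["gpu"]:
            continue
        out.append(s["name"])
    return out
```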
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements, modifies, or removes the service function chain, totally or partially, following requests from a Service Orchestrator, in order to guarantee connectivity between the workloads that enable service delivery, as well as connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example, adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) in order to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management, and physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines upon request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and K8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on a virtualization stack (cluster nodes are VMs), interacting with the PIM or the VIP respectively. &lt;br /&gt;
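The bare-metal-versus-VM decision described above could be sketched as follows; the function and field names are hypothetical and do not represent an actual MCM API. &lt;br /&gt;

```python
# Toy sketch of the MCM choice the text describes: a cluster is
# created either on bare metal (resources prepared by the PIM, nodes
# are servers) or on a virtualization stack (resources from the VIP,
# nodes are VMs). Names are illustrative only.
def create_cluster(name, nodes, on_bare_metal):
    backend = "PIM" if on_bare_metal else "VIP"
    node_kind = "server" if on_bare_metal else "VM"
    return {
        "cluster": name,
        "backend": backend,
        "nodes": [f"{node_kind}-{i}" for i in range(nodes)],
    }
```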
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application, or a Containerized Application running over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software packages on top of existing clusters following MCO requests. It exposes a single interface to deploy software packages (e.g. via a Helm chart or a resource model declaration) on any K8s cluster (or similar) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters across different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
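The single-interface, per-technology-connector idea can be sketched like this; the connector names and returned messages are purely illustrative, not a real WDM interface. &lt;br /&gt;

```python
# Sketch of a WDM-style dispatcher: one entry point, per-technology
# connectors (K8s clusters vs. plain VMs). Everything here is
# hypothetical and for illustration only.
def deploy(package, target):
    connectors = {
        "k8s": lambda p, t: f"helm install {p} on cluster {t['name']}",
        "vm": lambda p, t: f"copy and start {p} on VM {t['name']}",
    }
    handler = connectors.get(target["kind"])
    if handler is None:
        raise ValueError(f"no connector for {target['kind']}")
    return handler(package, target)
```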
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling the customer to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way, while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control, which ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available, and their configuration. The information in this repository helps the Multi-Cloud Orchestrator select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Virtualization&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that delivers virtualized computing resources, including virtual machines, storage, and networks. Users can provision, scale, and manage these resources dynamically according to their needs, while the cloud provider maintains the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that provides a platform for managing and deploying containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements a key security aspect of virtual infrastructure management: role-based access control that ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and the configuration and availability of virtual resources in each of them (for instance, the number of k8s clusters available per site, the CPU/memory available per k8s cluster, the number of virtual CPUs available to set up new k8s clusters, ...) in order to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Network Systems, SDN controllers&lt;br /&gt;
|It provides the capabilities to manage physical and virtualized/cloudified networking elements to build network services in the geographically distributed Cloud Edge Continuum.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Hardware&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:&lt;br /&gt;Monitoring and Management: provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center.&lt;br /&gt;Documentation and Planning: maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans.&lt;br /&gt;Risk Management: continuously monitors environmental conditions and equipment status, helping to identify potential risks and mitigate them before they lead to failures.&lt;br /&gt;Integration with IT Systems: integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation.&lt;br /&gt;Sustainability and Compliance: supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps record of cloud edge locations and the configuration and availability of physical hardware resources in each one of them (for instance, number of servers per location, type of servers, type of NIC cards available per location, cost of resources, energy consumption of resources, ...) in order to help take decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Physical Network Resources&lt;br /&gt;
|It includes all the physical hardware resources required to implement the Cloud Edge Continuum (compute, storage and networking). It is closely connected to the physical network infrastructure that supports communication among the computing nodes in the continuum and the connectivity of users to that continuum.&lt;br /&gt;
|}&lt;br /&gt;
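Several of the repository components above (Cloud Edge Resource Repository, Virtual Resource Repository, Hardware Resource Repository) exist to feed one decision: the orchestrator picks a site whose recorded capacity and attributes satisfy a workload's requirements. A minimal sketch of that selection logic follows; all names, fields and the latency-based tie-break are invented for illustration, since the ICRA does not prescribe a concrete placement algorithm.

```python
# Illustrative sketch of repository-driven workload placement.
# All field names and the selection rule are hypothetical.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Site:
    """One entry of a (hypothetical) resource repository."""
    name: str
    free_vcpus: int
    free_mem_gb: int
    latency_ms: float  # measured latency from the service area


@dataclass
class Workload:
    """Requirements extracted from a workload descriptor."""
    name: str
    vcpus: int
    mem_gb: int
    max_latency_ms: float


def place(workload: Workload, sites: list[Site]) -> Optional[Site]:
    """Return the feasible site with the lowest latency, or None if none fits."""
    feasible = [
        s for s in sites
        if s.free_vcpus >= workload.vcpus
        and s.free_mem_gb >= workload.mem_gb
        and s.latency_ms <= workload.max_latency_ms
    ]
    return min(feasible, key=lambda s: s.latency_ms, default=None)
```

A real orchestrator would also weigh cost, energy consumption, sustainability and privacy constraints, as the MCO and Hardware Resource Repository descriptions note.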
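The Access Control components at the platform and virtualization layers both describe the same role-based pattern: permissions attach to roles, and a subject may perform an operation if any of its roles carries the required permission. A toy sketch of that check, with role and permission names invented for illustration (a real deployment would back this with an IAM system):

```python
# Minimal sketch of the role-based access control pattern.
# Role and permission names are hypothetical examples.
ROLE_PERMISSIONS = {
    "admin":    {"workload:deploy", "workload:delete", "resource:configure"},
    "operator": {"workload:deploy", "workload:delete"},
    "viewer":   set(),  # read-only role carries no mutating permissions
}


def is_allowed(roles: list[str], permission: str) -> bool:
    """Grant access if any of the subject's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)
```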
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions onto the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=242</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=242"/>
		<updated>2025-10-22T12:59:25Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer supports describing an application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component requires, including the set of functions/services needed to support its execution; and the attributes that may guide the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and updates (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity, authenticates them, and checks their authorization to use the application before granting access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of applications and functions that providers have made available. Catalog entries describe the characteristics of each application and the environment it requires for execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Data Layer&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component provides data cataloguing to support exposure and discovery at scale, making it easy to search, find and browse data across a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for publishing data offers and contracting data acquisition, identity verification, and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|AI Layer&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments, creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundation models: LLMs, SLMs and multimodal LLMs, covering multiple languages and multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization, and text and speech generation. They can be fine-tuned and adapted to specific use cases using techniques like RAG, model quantization, pruning or distillation. The catalog includes multilingual and multimodal LLMs tailored to diverse EU languages, addressing the scarcity of generative AI solutions in non-English languages and ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work on local datasets, reducing the need to transfer data to a central location for training. This component allows AI resources across multiple providers to be used and orchestrated to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;5&amp;quot;|Service Orchestration&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it may communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator for this to take actions to recover a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Cloud Edge Platform&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|The Multi-Cloud Orchestrator (MCO) delivers Platform as a Service (PaaS). A PaaS provides a complete application development and deployment environment in the cloud: customers can build, test, deploy, manage, and update applications quickly and efficiently without worrying about the underlying infrastructure. The MCO receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) an application, together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes this desired state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or by rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machines, containers and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS managed by the MCO offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO takes care of infrastructure, security, and operational aspects. Based on attributes like area of service and performance, the MCO may select the location(s) where the workload is deployed and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This placement decision can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description is a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it totally or partially, following requests from a Service Orchestrator, to guarantee the connectivity between workloads that enables service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity arising from the need to ensure consistency between overlay and underlay networking solutions (for example, adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and k8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on virtualization stack (cluster nodes are VMs), interacting with PIM or VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application, which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of different virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software package(s) on top of existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (e.g. via a Helm chart or resource model declaration) on any K8s cluster (or alike) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters in different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling customers to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers seamlessly while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and the configuration. The information in this repository helps the multi-cloud orchestrator to select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications as individual functions, executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and consuming resources only when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Virtualization&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that delivers virtualized computing resources, including virtual machines, storage, and networks. Users can provision, scale, and manage these resources dynamically according to their needs, while the cloud provider maintains the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that provides a platform for managing and deploying containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements a key security aspect of virtual infrastructure management: role-based access control that ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and the configuration and availability of virtual resources in each of them (for instance, the number of k8s clusters available per site, the CPU/memory available per k8s cluster, the number of virtual CPUs available to set up new k8s clusters, ...) in order to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Network Systems, SDN controllers&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Hardware&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects. Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center. Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans. Risk Management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures. Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation. Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and the configuration and availability of physical hardware resources at each of them (for instance, the number and type of servers per location, the types of NIC cards available, the cost of resources, the energy consumption of resources, ...) to support decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
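As an illustration of how the resource repositories above can inform workload placement, here is a minimal sketch in Python. It is not part of the reference architecture: the site inventory, the field names (&#039;&#039;cpu_free&#039;&#039;, &#039;&#039;mem_free_gb&#039;&#039;, &#039;&#039;cost_per_hour&#039;&#039;) and the lowest-cost scoring rule are illustrative assumptions only.&lt;br /&gt;

```python
# Hypothetical sketch: choosing a cloud-edge site for a workload based on
# repository data. Field names and the scoring rule are illustrative only.

def pick_site(sites, cpu_needed, mem_needed_gb):
    """Return the name of the cheapest site that satisfies the request,
    or None if no site has enough free CPU and memory."""
    candidates = [
        s for s in sites
        if s["cpu_free"] >= cpu_needed and s["mem_free_gb"] >= mem_needed_gb
    ]
    if not candidates:
        return None
    # Prefer the lowest-cost site among those with sufficient resources.
    return min(candidates, key=lambda s: s["cost_per_hour"])["name"]

inventory = [
    {"name": "edge-a", "cpu_free": 8,  "mem_free_gb": 16, "cost_per_hour": 0.30},
    {"name": "edge-b", "cpu_free": 32, "mem_free_gb": 64, "cost_per_hour": 0.25},
    {"name": "edge-c", "cpu_free": 4,  "mem_free_gb": 8,  "cost_per_hour": 0.10},
]
```

For example, requesting 16 CPUs and 32 GB selects &#039;&#039;edge-b&#039;&#039;, while an unsatisfiable request returns None.&lt;br /&gt;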
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions onto the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=241</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=241"/>
		<updated>2025-10-22T12:57:35Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project; the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project; the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer supports describing the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services to support its execution; and the attributes that may guide the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and updates (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity, authenticates them, and checks their authorization to use the application before granting access. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of the applications and functions that providers have made available. Catalog entries describe the characteristics of each application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Data Layer&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component provides data cataloguing to support exposure and discovery at scale, making it easy to search, find and browse data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers and contract data acquisition, identity checking and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|AI Layer&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments, creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundation models: LLMs, SLMs and multimodal LLMs, covering multiple languages and managing multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization, or text and speech generation. They can be fine-tuned and adapted to specific use cases using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point based on the combination of models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work locally on local datasets, reducing the need to transfer data to a central location for training. This component enables the use and orchestration of AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Service Orchestration&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and reports deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to restore a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way while interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Cloud Edge Platform&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|Multi-Cloud Orchestrator delivers Platform as a Service (PaaS). A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. It receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes the state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machine, container and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS, managed by the MCO, offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO takes care of infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) where to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description shows a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that will enable the service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and k8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on virtualization stack (cluster nodes are VMs), interacting with PIM or VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application, which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of different virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software package(s) on top of existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (e.g. via a Helm chart or resource model declaration) on any K8s cluster (or alike) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters in different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling customers to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key aspect in terms of security in the management of cloud edge infrastructure, a role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and their configuration. The information in this repository helps the Multi-Cloud Orchestrator select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Virtualization&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that provides virtualized computing resources. IaaS delivers essential services such as virtual machines, storage, and networks. Users can provision, scale, and manage the resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that provides a platform allowing users to manage and deploy containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements role-based access control for virtual infrastructure management, ensuring proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and the configuration and availability of virtual resources at each of them (for instance, the number of k8s clusters available per site, the CPU/memory available per k8s cluster, the number of virtual CPUs available to set up new k8s clusters, ...) to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Network Systems, SDN controllers&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Virtualization&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects. Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center. Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans. Risk Management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures. Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation. Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and the configuration and availability of physical hardware resources at each of them (for instance, the number and type of servers per location, the types of NIC cards available, the cost of resources, the energy consumption of resources, ...) to support decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
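To illustrate the federated-averaging step that underlies the Federated Learning component described above, here is a minimal sketch in plain Python. The weight vectors and sample counts are illustrative assumptions; real components would exchange full model parameters over secure, sovereignty-preserving channels.&lt;br /&gt;

```python
# Minimal federated-averaging (FedAvg) sketch: each node trains locally and
# reports model weights plus its local sample count; the coordinator builds
# the global model as a sample-weighted average. Raw data never moves.

def fed_avg(updates):
    """updates: list of (weights, n_samples); weights are equal-length lists.
    Returns the sample-weighted average of the weight vectors."""
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [
        sum(w[i] * n for w, n in updates) / total
        for i in range(dim)
    ]

# Three nodes with different amounts of local data (illustrative values).
local_updates = [
    ([1.0, 2.0], 10),   # node A
    ([3.0, 4.0], 30),   # node B
    ([5.0, 6.0], 60),   # node C
]
global_weights = fed_avg(local_updates)  # [4.0, 5.0]
```

Each node contributes proportionally to its local sample count, so node C (60 of 100 samples) dominates the resulting average, while no local dataset ever leaves its node.&lt;br /&gt;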
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions onto the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=240</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=240"/>
		<updated>2025-10-22T12:56:29Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Application Layer&#039;&#039;&#039;&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services to support its execution; and the attributes that may allow the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity and checks their authorization to use the application before granting access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of applications and functions that providers have made available. Each entry describes the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;Data Layer&#039;&#039;&#039;&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component provides data cataloguing for exposure and discovery at scale, making it easy to search, find and browse data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers, contracting data acquisition, identity checking, and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|&#039;&#039;&#039;AI Layer&#039;&#039;&#039;&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundational models: LLMs, SLMs, multimodal LLMs, in multiple languages and managing multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization or text and speech generation. They can be fine-tuned and adapted to specific use cases, using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|AI Policy Control&lt;br /&gt;
|This component enables policy-based control over the AI Layer, including policies related to governance, security, and responsible AI practices, ensuring compliance and trustworthiness across the Cloud-Edge continuum.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;4&amp;quot;|&#039;&#039;&#039;Service Orchestration&#039;&#039;&#039;&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and reports deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to restore a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;11&amp;quot;|&#039;&#039;&#039;Cloud Edge Platform&#039;&#039;&#039;&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|Multi-Cloud Orchestrator delivers Platform as a Service (PaaS). A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. It receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes the state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machine, container and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS, managed by the MCO, offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO handles infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) where to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description shows a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that will enable the service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and k8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on virtualization stack (cluster nodes are VMs), interacting with PIM or VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application, which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of different virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software package(s) on top of an existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (i.e. via a helm chart or resource model declaration) on any K8s cluster (or alike) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters in different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling customers to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way, interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and the configuration. The information in this repository helps the multi-cloud orchestrator to select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;11&amp;quot;|&#039;&#039;&#039;Virtualization &amp;amp; Infrastructure&#039;&#039;&#039;&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers Bare Metal as a Service (BMaaS). BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides Infrastructure as a Service (IaaS). IaaS is a cloud computing model that delivers virtualized computing resources, providing essential services such as virtual machines, storage, and networks. Users can provision, scale, and manage these resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides Container as a Service (CaaS). CaaS is a cloud service model that provides a platform allowing users to manage and deploy containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements virtual infrastructure management security, a role-based access control that ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and the configuration and availability of virtual resources in each of them (for instance, number of K8s clusters available per site, CPU/memory available per K8s cluster, number of virtual CPUs available to set up new K8s clusters, ...) to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|Network Systems, SDN controllers&lt;br /&gt;
|This is a placeholder for the underlying network management systems and SDN controllers that manage both physical and virtual network connectivity.&lt;br /&gt;
|-&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects: Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center. Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans. Risk Management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures. Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation. Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and the configuration and availability of physical hardware resources in each of them (for instance, number of servers per location, types of servers, types of NIC cards available per location, cost of resources, energy consumption of resources, ...) to support decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|}&lt;br /&gt;
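The Application Designer row above says an application is described as a set of components, their connections (the service function chain), per-component runtimes, and placement attributes. A minimal sketch of what such a descriptor could look like, with purely illustrative field names (the ICRA does not prescribe a concrete schema):

```python
# Hypothetical application descriptor of the kind the Application Designer
# could produce and the Service Orchestrator / MCO consume. All field names
# ("components", "chain", "requirements", ...) are assumptions for illustration.

descriptor = {
    "name": "video-analytics",
    "components": [
        {"id": "ingest", "runtime": "container", "image": "ingest:1.0",
         "requirements": {"latency_ms": 20, "location": "edge"}},
        {"id": "detect", "runtime": "container", "image": "detector:1.0",
         "requirements": {"gpu": True}},
        {"id": "store", "runtime": "vm", "image": "archive:1.0",
         "requirements": {"location": "cloud"}},
    ],
    # Service function chain: the order in which traffic flows between components.
    "chain": [("ingest", "detect"), ("detect", "store")],
}

def chain_order(descriptor):
    """Return component ids in chain order (a simple linear chain is assumed)."""
    order = [descriptor["chain"][0][0]]
    for _src, dst in descriptor["chain"]:
        order.append(dst)
    return order

print(chain_order(descriptor))  # ['ingest', 'detect', 'store']
```

The per-component requirements are exactly the attributes the orchestration layers use: the MCO can place "ingest" at an edge site for latency while "store" goes to a cloud location.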
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=239</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=239"/>
		<updated>2025-10-22T12:54:50Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services to support its execution; and the attributes that may allow the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity and checks their authorization to use the application before granting access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of applications and functions that providers have made available. Each entry describes the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Data Layer&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component enables data cataloguing for exposure and discovery at scale, making it easy to search, find and browse data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for publishing data offers and contracting data acquisition, identity checking, and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|AI Layer&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundation models: LLMs, SLMs and multimodal LLMs, in multiple languages and covering multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization or text and speech generation. They can be fine-tuned and adapted to specific use cases, using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Service Orchestration&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, it could communicate with the Multi-Cloud Orchestrator that manages the virtualized infrastructure layer, offering a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment, and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator for this to take actions to recover a state that meets application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the deployment and execution of applications (service function chains) across multiple providers in a seamless way, interacting with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Cloud Edge Platform&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|Multi-Cloud Orchestrator delivers a Platform as a Service (PaaS) offering. A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. It receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). MCO processes the state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or rescaling or releasing resources. MCO works in close relationship with other components (PIM, VIP, MCM, Serverless orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machine, container and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS, managed by the MCO, offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO takes care of infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) where to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This MCO description shows a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components in order to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it, totally or partially, following the requests from a Service Orchestrator, to guarantee the connectivity between workloads that will enable the service delivery and the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity deriving from the need to ensure consistency between overlay and underlay networking solutions (for example adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters both over bare metal and over virtual machines after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and k8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on virtualization stack (cluster nodes are VMs), interacting with PIM or VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application, which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of different virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software package(s) on top of an existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (i.e. via a helm chart or resource model declaration) on any K8s cluster (or alike) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters in different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with the ones of other federated providers, enabling customers to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way, interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: a role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and the configuration. The information in this repository helps the multi-cloud orchestrator to select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Virtualization&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers a Bare Metal as a Service (BMaaS) offering. BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides an Infrastructure as a Service (IaaS) offering. IaaS is a cloud computing model that provides virtualized computing resources, delivering essential services such as virtual machines, storage, and networks. Users can provision, scale, and manage the resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides a Container as a Service (CaaS) service. CaaS is a cloud service model that provides a platform allowing users to manage and deploy containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements security for virtual infrastructure management: a role-based access control that ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and the configuration and availability of virtual resources in each one of them (for instance, number of k8s clusters available per site, CPU/memory available per k8s cluster, number of virtual CPUs that are available to set up new k8s clusters, ...) in order to help take decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Virtualization&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:&lt;br /&gt;
* Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center.&lt;br /&gt;
* Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans.&lt;br /&gt;
* Risk Management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures.&lt;br /&gt;
* Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation.&lt;br /&gt;
* Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and the configuration and availability of physical hardware resources in each one of them (for instance, number of servers per location, type of servers, type of NIC cards available per location, cost of resources, energy consumption of resources, ...) in order to help take decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
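The Federated Learning row in the table above describes training agents that work on local datasets while only model parameters are combined at a central point. A minimal sketch of that idea, in the style of federated averaging, is shown below; the function names, the toy 1-D linear model and the learning rate are purely illustrative assumptions, not part of the IPCEI-CIS Reference Architecture.

```python
# Illustrative sketch (not the reference architecture's API): each
# "provider" runs gradient descent on its local data; only the
# resulting weights -- never the raw data -- reach the aggregator.

def local_update(weights, local_data, lr=0.1):
    """One local training pass for a 1-D linear model y = w * x."""
    w = weights
    for x, y in local_data:
        grad = 2 * (w * x - y) * x  # derivative of (w*x - y)^2 w.r.t. w
        w -= lr * grad
    return w

def federated_average(weights_per_node, samples_per_node):
    """Combine local weights, weighted by each node's dataset size."""
    total = sum(samples_per_node)
    return sum(w * n for w, n in zip(weights_per_node, samples_per_node)) / total

# Two hypothetical providers whose local datasets follow y = 2x.
node_a = [(1.0, 2.0), (2.0, 4.0)]
node_b = [(3.0, 6.0)]

w_global = 0.0
for _ in range(50):  # federation rounds: local training, then aggregation
    w_a = local_update(w_global, node_a)
    w_b = local_update(w_global, node_b)
    w_global = federated_average([w_a, w_b], [len(node_a), len(node_b)])

print(round(w_global, 2))  # → 2.0, recovered without pooling the raw data
```

Weighting the average by dataset size is the conventional choice in federated averaging; a real Federated Learning component would add secure aggregation and policy checks consistent with the Data Policy Control and federation capabilities described above.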
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=238</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=238"/>
		<updated>2025-10-22T12:53:42Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (the service function chain); the runtime environment each application component will require, including the set of functions/services that support its execution; and the attributes that may guide the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity, authenticates the user and checks their authorization to use the application before providing access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of applications and functions that the providers have made available. Each entry contains the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Data Layer&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component enables data cataloguing for exposure and discovery at scale, making it easy to search, find and browse data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for publishing data offers and contracting data acquisition, identity checking, and data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|AI Layer&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Inference components facilitate real-time deployment and execution of trained AI models on edge devices with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundation models: LLMs, SLMs and multimodal LLMs, in multiple languages and covering multiple data types: text, images, video, code, etc. These models provide support for Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization or text and speech generation. They can be fine-tuned and adapted to specific use cases, using techniques like RAG, model quantization, pruning or distillation. The catalog contains multilingual and multimodal LLMs tailored to diverse EU languages, capable of understanding and processing diverse data types, including text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work locally on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Service Orchestration&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, the Service Orchestrator could communicate with the Multi-Cloud Orchestrator, which manages the virtualized infrastructure layer and offers a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to restore a state that meets the application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the seamless deployment and execution of applications (service function chains) across multiple providers while the customer interacts with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Cloud Edge Platform&lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cloud Orchestrator (PaaS)&lt;br /&gt;
|The Multi-Cloud Orchestrator (MCO) delivers a Platform as a Service (PaaS) offering. A PaaS provides a complete application development and deployment environment in the cloud. With PaaS, customers can build, test, deploy, manage, and update applications quickly and efficiently, without worrying about the underlying infrastructure. The MCO receives from the Service Orchestrator a request to deploy (or manage the lifecycle of) a certain application, together with a descriptor (resource model) that defines the state the application needs for its execution (including runtime environment, services, data, application image and other attributes like area of service, performance...). The MCO processes that state and takes actions to set it up and preserve it, by updating, upgrading or removing workloads and services, or by rescaling or releasing resources. The MCO works in close relationship with other components (PIM, VIP, MCM, Serverless Orchestrator) to provide the virtual runtime environment defined for the application: the specific combination of bare metal, virtual machines, containers and serverless mechanisms it has been developed to run on, using the technologies over which it has been tested and certified. The MCO also deploys and manages the lifecycle of essential tools and services such as middleware, development frameworks, databases, and business analytics, enabling organizations to streamline application development and drive innovation. A PaaS managed by the MCO offers scalability, high availability, and reduced time-to-market, allowing developers to focus on coding and application functionality while the MCO handles infrastructure, security, and operational aspects. Based on certain attributes, like area of service and performance, the MCO may select the location(s) where to deploy the workload and the resources (physical and virtual) required at those location(s) to meet the desired state. 
This decision on application placement can also follow sustainability and privacy requirements. The MCO deploys the workload once the necessary resources are available, using the Workload Deployment Manager. The MCO also updates and removes workloads, rescaling or releasing the corresponding resources. This description of the MCO shows a decomposition of the functionality of a cloud-edge continuum workload management solution that may be implemented in many ways, combining or excluding some of its components to fit specific sector needs. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Connectivity Manager&lt;br /&gt;
|The Cloud Edge Connectivity Manager (CEC) implements and modifies the service function chain, or removes it totally or partially, following requests from a Service Orchestrator, to guarantee the connectivity between workloads that enables service delivery, as well as the connectivity from the service user to the workloads implementing the service front-end. Connectivity is usually based on overlay and underlay components in each domain crossed by the traffic (e.g. WAN, data centers, etc.). The CEC manages the networking in the data center domain through the virtualization managers (VIM, CISM) or via specific NaaS interfaces. It manages the WAN connectivity using Cloud Networking services (via transport SDN Controllers) for the connection of different computing nodes. In addition, the CEC manages the complexity arising from the need to ensure consistency between overlay and underlay networking solutions (for example, adapting the networking between the data center fabric and the AAN connectivity). &lt;br /&gt;
|-&lt;br /&gt;
|Physical Infrastructure Manager&lt;br /&gt;
|The Physical Infrastructure Manager (PIM) monitors and manages a pool of physical resources (CPUs, storage, networking), and selects and prepares them (with the corresponding OS and necessary software) to allocate these resources to a virtual machine or container cluster. The PIM provides multiple physical infrastructure management functions, including physical resource provisioning and lifecycle management, physical resource inventory management or physical resource performance management. &lt;br /&gt;
|-&lt;br /&gt;
|Multi-Cluster Manager&lt;br /&gt;
|The Multi-Cluster Manager (MCM) creates and configures container clusters, both over bare metal and over virtual machines, after a request from the MCO, offering a single interface to manage infrastructure from multiple providers and with multiple K8s distributions. The MCM provides open connectors/APIs to interact with the resources and K8s distributions offered by different providers (private &amp;amp; public) for cluster creation, configuration and monitoring, and keeps track of their evolution. The MCM may create a K8s cluster on bare metal (cluster nodes are servers) or on a virtualization stack (cluster nodes are VMs), interacting with the PIM or the VIP respectively. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Platform Manager&lt;br /&gt;
|The Virtual Infrastructure Platform Manager (VIP) creates virtual machine clusters across several locations using the resources allocated by the PIM. The VIP is required when the service component to be deployed is a Virtualized Application or a Containerized Application, which runs over container clusters that make use of VMs (virtual machines). This component works on infrastructure and technology from different providers, enabling the Cloud Edge continuum to run on a diverse set of different virtualization solutions (VIMs, CISMs or any other future virtualization technology). &lt;br /&gt;
|-&lt;br /&gt;
|Workload Deployment Manager&lt;br /&gt;
|The Workload Deployment Manager (WDM) deploys software package(s) on top of an existing cluster(s) following MCO requests. It exposes a single interface to deploy software packages (i.e. via a helm chart or resource model declaration) on any K8s cluster (or alike) based on any distribution. The WDM provides the connectors/APIs to interact with existing clusters in different locations &amp;amp; technologies (K8s distributions) for application deployment and lifecycle management. This component can also deploy software packages directly on virtual machines (IaaS). &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Federation&lt;br /&gt;
|This component interconnects the Multi-Cloud Orchestrator with those of other federated providers, enabling customers to use cloud edge computing services (IaaS, CaaS, PaaS, Serverless, NaaS...) across multiple providers in a seamless way while interacting with a single provider. This platform federation provides seamless integration and collaboration between multiple cloud platform providers, enabling interoperability, resource sharing, and unified lifecycle management. Shared resources may exist on all layers of the cloud architecture. By adopting standardized protocols and interfaces, platform federation facilitates enhanced scalability, efficiency, and innovation across different cloud environments while maintaining autonomy and security for each participating entity. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Cloud Edge Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Access Control&lt;br /&gt;
|This component implements a key security aspect of cloud edge infrastructure management: role-based access control that ensures proper access rights and security across the infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud Edge Resource Repository&lt;br /&gt;
|This component keeps a record of the resources available in each of the edge locations, the virtualization platforms available and the configuration. The information in this repository helps the multi-cloud orchestrator to select the right location(s) to deploy workloads. &lt;br /&gt;
|-&lt;br /&gt;
|Workload Inventory&lt;br /&gt;
|This component keeps a record of the workloads that have been deployed and their configuration, as well as information about the location and cluster where they have been deployed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Serverless Orchestrator (FaaS)&lt;br /&gt;
|The Serverless Orchestrator provides serverless capabilities, also known as Function as a Service (FaaS). FaaS is a cloud computing model that allows developers to build and deploy applications in the form of individual functions, which are executed in response to specific events or triggers. This model eliminates the need to manage server infrastructure, enabling developers to focus solely on writing code. Each function runs in a stateless container, automatically scaling with demand and only consuming resources when invoked, leading to cost savings and efficient resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;12&amp;quot;|Virtualization&lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Manager (BMaaS)&lt;br /&gt;
|The Hardware Resource Manager component delivers a Bare Metal as a Service (BMaaS) offering. BMaaS is an abstraction that provides physical, non-virtualized hardware resources directly to users, offering dedicated servers, storage, and networking components without any virtualization layer. This service allows users to harness the full power of the hardware for their applications, resulting in higher performance, predictable latency, and complete control over the environment. BMaaS is particularly beneficial for workloads that require intensive computation, low-latency networking, or compliance with specific hardware configurations.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Infrastructure Manager (IaaS)&lt;br /&gt;
|The Virtual Infrastructure Manager (VIM) component provides an Infrastructure as a Service (IaaS) offering. IaaS is a cloud computing model that provides virtualized computing resources, delivering essential services such as virtual machines, storage, and networks. Users can provision, scale, and manage these resources dynamically according to their needs, while the cloud provider takes care of maintaining the underlying hardware, networking, and security. This model offers high flexibility, enabling organizations to quickly deploy and run applications and services, test new solutions, and handle varying workloads with ease, ultimately driving innovation and operational efficiency. &lt;br /&gt;
|-&lt;br /&gt;
|Container Infrastructure Service Manager (CaaS)&lt;br /&gt;
|The Container Infrastructure Service Manager (CISM) component provides a Container as a Service (CaaS) service. CaaS is a cloud service model that provides a platform allowing users to manage and deploy containerized applications and workloads. By leveraging container orchestration tools such as Kubernetes, CaaS facilitates the automation of container deployment, scaling, and operations, ensuring high availability and performance. This model abstracts the underlying infrastructure complexities, enabling developers and IT teams to focus on application and service development and deployment without worrying about the maintenance of the physical or virtual infrastructure. &lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Access Control&lt;br /&gt;
|As in the Cloud Edge Platform layer, this Access Control component implements security for virtual infrastructure management: role-based access control that ensures proper access rights and security for virtual resource management.&lt;br /&gt;
|-&lt;br /&gt;
|Virtual Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge sites and of the configuration and availability of virtual resources in each of them (for instance, number of K8s clusters available per site, CPU/memory available per K8s cluster, number of virtual CPUs available to set up new K8s clusters, ...) in order to support decisions on workload placement. &lt;br /&gt;
|-&lt;br /&gt;
|colspan=&amp;quot;2&amp;quot;|Network Systems, SDN controllers&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Virtualization&lt;br /&gt;
|Compute&lt;br /&gt;
|Compute resources are fundamental to cloud infrastructure, delivering the computational power required for running applications and services. They facilitate scalable and efficient environments that dynamically adjust to varying workloads, thus enhancing resource utilization and performance while minimizing costs. &lt;br /&gt;
|-&lt;br /&gt;
|Storage&lt;br /&gt;
|Storage is essential in cloud infrastructure, providing data persistence, management, and accessibility. It includes block storage for databases, object storage for unstructured data, and file storage for shared access applications. Advanced technologies like SSDs and distributed file systems ensure scalability, reliability, and performance. &lt;br /&gt;
|-&lt;br /&gt;
|Networking&lt;br /&gt;
|Hardware networking resources in a cloud edge location include routers, switches, load balancers, and firewalls. These components form the backbone of data center connectivity and inter-server communication. Network Interface Cards (NICs) in servers enable high-throughput connections to the virtual network. WAN gateways and edge routers extend connectivity to external networks, supporting hybrid cloud and remote access scenarios. All hardware is managed centrally through SDN controllers and scaled dynamically to support edge cloud service demands. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Infrastructure Manager&lt;br /&gt;
|A Hardware Infrastructure Manager (also known as a Data Center Infrastructure Management system, DCIM) is a management component designed to monitor, measure, and manage the IT equipment and infrastructure within a cloud edge data center. It encompasses the following key aspects:&lt;br /&gt;* Monitoring and Management: it provides real-time monitoring of data center operations, including power usage, cooling efficiency, and physical security, helping to optimize the performance and efficiency of the data center.&lt;br /&gt;* Documentation and Planning: it maintains detailed documentation of the data center&#039;s physical and virtual assets, including layout planning, capacity management, and future expansion plans.&lt;br /&gt;* Risk Management: by continuously monitoring environmental conditions and equipment status, it helps identify potential risks and mitigate them before they lead to failures.&lt;br /&gt;* Integration with IT Systems: it integrates with other IT management systems to provide a holistic view of the data center&#039;s operations, facilitating better decision-making and resource allocation.&lt;br /&gt;* Sustainability and Compliance: it supports sustainability goals by optimizing energy usage and ensuring compliance with industry standards and regulations. &lt;br /&gt;
|-&lt;br /&gt;
|Hardware Resource Repository&lt;br /&gt;
|This component keeps a record of cloud edge locations and of the configuration and availability of physical hardware resources in each of them (for instance, number of servers per location, type of servers, type of NIC cards available per location, cost of resources, energy consumption of resources, ...) in order to support decisions on workload placement and resource lifecycle management. &lt;br /&gt;
|-&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=237</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=237"/>
		<updated>2025-10-22T12:44:50Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services supporting its execution; and the attributes that may allow the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|The Application Packager supports the packaging of applications for their deployment in the Cloud-Edge continuum. It facilitates the automation of application deployment and update (DevOps, both traditional and AI-assisted), providing an integrated toolkit that enables quick, secure and innovative ways to deploy cloud-aware applications. It also provides tools for automatic verification and validation (CV/CT) of the application and its supply chain before its final packaging. &lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|This component provides the interface to invoke and use the applications contained in the catalog. It verifies the user&#039;s identity, authenticates the user and checks their authorization to use the application before providing access to it. &lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|It tracks application usage and execution, monitors the performance and identifies abnormal behavior and suboptimal use of resources. &lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|It implements a directory of the applications and functions that providers have made available. Each entry contains the characteristics of the application and the environment it requires for its execution (runtime, services, hardware characteristics). &lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|This component implements the accounting of application usage and provides online charging information for the customer to track application expenditure in real-time. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Data Layer&lt;br /&gt;
|-&lt;br /&gt;
|Data Pipelines&lt;br /&gt;
|This component provides the functionality for data collection, including the connectors to integrate with the data sources and the capabilities for data curation and pre-processing that ensure its quality and readiness for analytics, insight generation, training, modelling or inferencing phases. &lt;br /&gt;
|-&lt;br /&gt;
|Data Modelling&lt;br /&gt;
|This component provides data cataloguing for exposure and discovery at scale, making it easy to search, find and browse data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Exposure&lt;br /&gt;
|The Data Exposure component provides customers with standard mechanisms and interfaces for safe and controlled access to data. It includes capabilities for making data offers and contracting data acquisition, for identity checking, and for data access authentication and authorization. &lt;br /&gt;
|-&lt;br /&gt;
|Data Policy Control&lt;br /&gt;
|Data Policy Control sets the required policies for data sharing, providing a safe, controlled and regulation-compliant environment for data exchange. It allows the data owner to manage the permissions to access its data: who can access it, under which conditions and for which purposes. &lt;br /&gt;
|-&lt;br /&gt;
|Data Catalog&lt;br /&gt;
|Data Catalog provides efficient storage and indexing of data to facilitate browsing, searching and finding data over a distributed environment.&lt;br /&gt;
|-&lt;br /&gt;
|Data Federation&lt;br /&gt;
|Data Federation enables standard mechanisms and interfaces (connectors) for partnering in the provision of datasets, providing a unified view of data catalogs and databases from multiple data providers. This component enables real-time data exchange across companies using data mesh principles, connecting distributed and heterogeneous actors over the cloud-edge continuum, keeping data owners in full control of their data. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Data Federation capabilities should be designed consistently with the other federation capabilities described in this document.&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|AI Layer&lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Training&lt;br /&gt;
|This component facilitates the dynamic and adjustable training of AI models across cloud and edge environments, ensuring scalability, reduced latency, and optimized resource utilization. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Inference&lt;br /&gt;
|The Cloud-Edge Inference component facilitates the real-time deployment and execution of trained AI models on edge devices, with efficient synchronization with the cloud for updates, monitoring, and enhancements. &lt;br /&gt;
|-&lt;br /&gt;
|Cloud-Edge Agent Manager&lt;br /&gt;
|The Cloud-Edge Agent Manager enables the deployment and management of agents and agentic workflows on edge and hybrid edge-cloud deployments creating an agentic mesh. &lt;br /&gt;
|-&lt;br /&gt;
|AI Model Catalog&lt;br /&gt;
|This component contains trained foundational models (LLMs, SLMs and multimodal LLMs) covering multiple languages and multiple data types: text, images, video, code, etc. These models support Natural Language Processing (NLP), Machine Translation (MT), speech processing, text analysis, information extraction, summarization, and text and speech generation. They can be fine-tuned and adapted to specific use cases using techniques such as RAG, model quantization, pruning or distillation. The catalog includes multilingual and multimodal LLMs tailored to the diverse EU languages, capable of understanding and processing text, images, and multimedia. These models address the scarcity of generative AI solutions in non-English languages, ensuring semantic precision, completeness, and compliance with the AI Act. &lt;br /&gt;
|-&lt;br /&gt;
|Federated Learning&lt;br /&gt;
|AI workloads can be split across multiple nodes with central orchestration for scalability and efficiency (Distributed AI). AI Federation enables autonomous nodes to collaborate securely, ensuring privacy and sovereignty. Together, they balance task-sharing efficiency with autonomy. In distributed AI training, the AI model is generated at a central point by combining the models produced by different training agents distributed across an ecosystem of federated AI service providers or owners. The distributed training agents work locally on local datasets, reducing the need to transfer data to a central location for training. This component makes it possible to use and orchestrate AI resources across multiple providers to collaboratively perform a specific machine learning training task. It leverages a federated network of AI capabilities geographically distributed across the multi-provider Cloud Edge Continuum, enabling seamless resource sharing and scaling while maintaining sovereignty and compliance. It ensures efficient distribution of AI computational workloads, minimizes data movement, and facilitates parallel model training without requiring centralized data aggregation, thus preserving data privacy and autonomy while enhancing overall system performance. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Federated Learning capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|-&lt;br /&gt;
|AI Explainability&lt;br /&gt;
|This Explainable AI component ensures transparency by providing interpretable insights into AI decision-making processes. It supports compliance, accountability, and trust by enabling users and regulators to understand, audit, and validate AI models while respecting privacy and data sovereignty. &lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Service Orchestration&lt;br /&gt;
|-&lt;br /&gt;
|Service Orchestrator&lt;br /&gt;
|Service orchestration ensures efficient task execution, load balancing and real-time operations. For example, the Service Orchestrator could communicate with the Multi-Cloud Orchestrator, which manages the virtualized infrastructure layer and offers a single unified environment for application development and monitoring. This allows applications and services to be deployed seamlessly across multiple platforms, optimizing resource allocation and reducing operational complexity. Alternatively, the Service Orchestrator may directly or indirectly interact with the underlying capabilities of the cloud platform or virtualization management layer to orchestrate workload execution. The Service Orchestrator automates application and tenant deployment and lifecycle management processes. By automating workflows (or service function chains), orchestration ensures that services communicate efficiently across the cloud-edge continuum. &lt;br /&gt;
|-&lt;br /&gt;
|Application Performance Management&lt;br /&gt;
|It monitors the performance and resource consumption of the application or service and communicates deviations from set thresholds or SLAs to the Service Orchestrator, so that it can take action to restore a state that meets the application requirements. It provides a unified view of states, including logging, monitoring, and alerting, for effective real-time application management and validation at runtime. &lt;br /&gt;
|-&lt;br /&gt;
|Application Repository&lt;br /&gt;
|This component tracks the applications and services that have been deployed and their configuration, the locations where the application and service components are installed and the resources they are consuming. &lt;br /&gt;
|-&lt;br /&gt;
|Service Federation&lt;br /&gt;
|This component interconnects the Service Orchestrator with those of other federated providers, enabling the seamless deployment and execution of applications (service function chains) across multiple providers while the customer interacts with a single provider. In order to create and maintain a coherent federated multi-provider Cloud Edge Continuum, Service Federation capabilities should be designed consistently with the other federation capabilities described in this document. &lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=236</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=236"/>
		<updated>2025-10-22T12:38:20Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Application Layer&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|This component enables developers to design, create and customize applications using intuitive interfaces or predefined templates. It facilitates rapid development, integration and delivery of tailored applications using automated CI/CD practices (DevOps). The Application Designer facilitates the description of the application in terms of: the set of application components it is made of and how they are connected (service function chain); the runtime environment each application component will require, including the set of functions/services to support its execution; and the attributes that may guide the selection of the computing node to host it (hardware requirements, latency, privacy, etc.). &lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=235</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=235"/>
		<updated>2025-10-22T12:37:17Z</updated>

		<summary type="html">&lt;p&gt;Admin: /* Layers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;7&amp;quot;|Bread &amp;amp; Butter&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=234</id>
		<title>IPCEI-CIS Reference Architecture</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IPCEI-CIS_Reference_Architecture&amp;diff=234"/>
		<updated>2025-10-22T12:36:45Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== Why a Reference Architecture? ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039; &lt;br /&gt;
&lt;br /&gt;
The digital future of Europe requires a cohesive and interoperable infrastructure, one that spans cloud and edge environments, integrates AI and data services, and ensures security, sustainability, and sovereignty across borders. To support this, the [https://www.8ra.com/ipcei-cis/ IPCEI-CIS] Reference Architecture (ICRA) defines a common framework for designing, deploying, and operating cloud-edge systems in a federated, multi-provider landscape, representing a strategic instrument within the [https://www.8ra.com/news/laying-the-groundwork-for-europes-federated-cloud-edge-future/ 8ra Initiative]. &lt;br /&gt;
&lt;br /&gt;
== Layers ==&lt;br /&gt;
&#039;&#039;The content of this section has been created by the CISERO project, the original content can be accessed here: https://cisero-project.eu/ipcei-cis-reference-architecture&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
{| class=&amp;quot;wikitable&amp;quot;&lt;br /&gt;
|-&lt;br /&gt;
|rowspan=&amp;quot;6&amp;quot;|Bread &amp;amp; Butter&lt;br /&gt;
|-&lt;br /&gt;
|Application Designer&lt;br /&gt;
|-&lt;br /&gt;
|Application Packager&lt;br /&gt;
|-&lt;br /&gt;
|API Gateway&lt;br /&gt;
|-&lt;br /&gt;
|Application Monitoring&lt;br /&gt;
|-&lt;br /&gt;
|Application Catalog&lt;br /&gt;
|-&lt;br /&gt;
|Application Accounting and Billing&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
== Mapping of Horizon Europe EUCEI RIAs on the Reference Architecture ==&lt;br /&gt;
The table below provides a schematic mapping of Research and Innovation Actions, as related to the IPCEI-CIS Reference Architecture components.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[IPCEI-CIS Reference Architecture high level::Management]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?IPCEI-CIS Reference Architecture high level&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=NexusForum.EU_Working_Groups&amp;diff=233</id>
		<title>NexusForum.EU Working Groups</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=NexusForum.EU_Working_Groups&amp;diff=233"/>
		<updated>2025-10-22T12:10:34Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=== &#039;&#039;&#039;NexusForum.EU Working Groups&#039;&#039;&#039; ===&lt;br /&gt;
The main objective of the Working Groups is to &#039;&#039;&#039;collect contributions and feedback&#039;&#039;&#039; to the roadmap from relevant &#039;&#039;&#039;EU industry experts and researchers&#039;&#039;&#039;.  The  NexusForum.EU thematic Working Groups are based on the main sections of the Research and Innovation roadmap. These working groups are aligned with two European strategic initiatives: the European Alliance for Industrial Data, Edge and Cloud and the Important Project of Common European Interest on Next Generation Cloud Infrastructure and Services (IPCEI-CIS).&lt;br /&gt;
&lt;br /&gt;
The &#039;&#039;&#039;Working Groups&#039;&#039;&#039; are organised in three main categories: &#039;&#039;&#039;European Alliance for Industrial Data, Edge and Cloud, IPCEI-CIS&#039;&#039;&#039; and &#039;&#039;&#039;International cooperation.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;European Alliance for Industrial Data, Edge and Cloud&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Sovereignty &amp;amp; Open Source]]&#039;&#039;&#039; =====&lt;br /&gt;
The scope of the Sovereignty &amp;amp; Open Source Working Group is to focus on moving towards European digital sovereignty, bolstering European digital capabilities and skills development related to computing technologies. &lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Alberto P. Martí]] and [[Sachiko Muto]].  &lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Sustainability]]&#039;&#039;&#039; =====&lt;br /&gt;
The Sustainability Working Group is centered around Data Centre energy-/resource-efficiency, efficiency metrics, circular economy in the data center industry and data platforms to enable decarbonization.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Antje Raetzer Scheibe]] and [[Jon Summers]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Interoperability]]&#039;&#039;&#039; =====&lt;br /&gt;
The Interoperability Working Group is set to focus on APIs, standards for interoperability, meta-orchestration and federation; the federation of distributed cloud and edge computing resources; and abstraction layers and standardization for meta-orchestration and workload optimization in multi-provider federations. Finally, it addresses interoperability across cloud/edge platforms and providers, covering the bare-metal, IaaS and PaaS layers. &lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Gorka Benguria]] and [[Lukas Rybok]]. &lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cybersecurity]]&#039;&#039;&#039; =====&lt;br /&gt;
The scope of the Cybersecurity Working Group is to work on the themes of zero trust, identity management, privacy, end-to-end encryption/confidentiality, public key infrastructure, security protocols and standards, and risk assessment.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Iraklis Symeonidis]] and [[Arthur van der Wees]].&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;IPCEI-CIS&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cloud-Edge Use Cases]]&#039;&#039;&#039; =====&lt;br /&gt;
The Cloud-Edge Use Cases Working Group deals with use cases and applications that could take advantage of the continuum, such as Mobility, Transport and Travel, Energy, Manufacturing and Industry 4.0/5.0, Health, Infrastructures, Smart Buildings &amp;amp; Cities, Tourism &amp;amp; Cultural Heritage, Agriculture &amp;amp; Environment, and Media. This Working Group will contribute to the initial roll-out of next generation use cases as part of a first industrial deployment with European wide scale, showcasing data processing in different sectors to verify functionality, high scalability, interoperability, portability, interconnectivity and compatibility.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Giovanni Frattini]] and [[Dimosthenis Kyriazis]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[AI for Cloud-Edge]]&#039;&#039;&#039; =====&lt;br /&gt;
The AI for Cloud-Edge Working Group works on meta-orchestration, federation of multi-cloud, IaaS, and integration and monitoring of the continuum, as well as everything related to operating systems and virtualization: containers, hypervisors, virtual networks and virtual storage. The scope extends to serverless services spanning edge-cloud-HPC, service management, application lifecycle orchestration and data services, and to the development of infrastructure-related services to run on the multi-provider cloud-edge continuum, which is the basis for real-time data services with ultra-low latency and for load balancing for optimised utilization. This will enable sorting, interpreting and prioritizing the storage and processing of large amounts of data in advance, as close as possible to the place of origin and/or consumption of that data.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Antonio Álvarez]] and [[Ian Marsh]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Cloud-Edge for AI]]&#039;&#039;&#039; =====&lt;br /&gt;
The Cloud-Edge for AI Working Group works on dataspaces, data exchange (advanced capabilities), AI tools for federated learning and the lifecycle of AI models, and support for data-driven applications, digital twins and application deployment – workloads that need to run on a cloud-edge infrastructure and require complex orchestration, identity and observability across the multi-provider continuum. Furthermore, the scope of the WG consists of providing integrated services such as application lifecycle management to build, deploy and maintain apps all over the cloud-edge continuum (platform services); data management to ease data ingestion, transformation and analysis in a multi-provider, federated environment in accordance with European regulation (data platform); and innovative data processing leveraging AI and ML (smart processing services).&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Björn Forsberg]] and [[Antal Kuthy]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[Telco Cloud-Edge]]&#039;&#039;&#039; =====&lt;br /&gt;
The Working Group on Telco Cloud-Edge deals with open reference architectures for open cloud and edge infrastructure, standards for data centers, AI-based predictive maintenance of datacenters and cloud &amp;amp; edge infrastructure, networking, time synchronization, requirements across providers, security, automated operations, and APIs for bare-metal as a service. Furthermore, it will work on optimized data centre design for cloud and edge, advanced simulation and prediction capabilities, security and accessibility of physical infrastructure, open hardware, operating systems, operation management &amp;amp; monitoring, connectivity, network orchestration with multi-cloud orchestration, edge infrastructure deployment, network functions at the edge, connectivity between cloud and edge, and edge connectivity at scale. The Working Group will contribute to the setting up of an appropriate and supported next generation infrastructure to manage the technological complexity of the meshed continuum. Lastly, its aim is to develop and set up physical and logical linking of networks, including integrated smart network services for the cloud-edge continuum. This will enable the entire network to combine cloud-edge computing processes and data transfer throughout the EU.&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[David Artuñedo]] and [[Anders Lindgren]].&lt;br /&gt;
&lt;br /&gt;
==== &#039;&#039;&#039;International cooperation&#039;&#039;&#039; ====&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[EU-Japan Cooperation]]&#039;&#039;&#039; =====&lt;br /&gt;
The EU-Japan Cooperation Working Group is meant to foster international collaboration based on the future association agreements of the Horizon Europe programme. This Working Group is an opportunity for experts in the industry and research fields to discuss any complementary aspects that can result in better cooperation between the European Union and Japan in terms of technical capabilities, policy and legal aspects.&lt;br /&gt;
&lt;br /&gt;
Discussions revolve around questions such as: what are the key priorities towards which both the EU and Japan are willing to commit? In which domains can effective cooperation take place?&lt;br /&gt;
&lt;br /&gt;
The Working Group co-leaders are [[Wiktoria Bochenska]] and [[Kazuyuki Shimizu]].&lt;br /&gt;
&lt;br /&gt;
===== &#039;&#039;&#039;[[EU-Korea Cooperation]]&#039;&#039;&#039; =====&lt;br /&gt;
The EU-Korea Cooperation Working Group is meant to foster international collaboration based on the future association agreements of the Horizon Europe programme. This Working Group is an opportunity for experts in the industry and research fields to discuss any complementary aspects that can result in better cooperation between the European Union and the Republic of Korea in terms of technical capabilities, policy and legal aspects. Discussions revolve around questions such as: what are the key priorities towards which both the EU and Korea are willing to commit? In which domains can effective cooperation take place?&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Research_and_Innovation_Actions&amp;diff=232</id>
		<title>Research and Innovation Actions</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Research_and_Innovation_Actions&amp;diff=232"/>
		<updated>2025-10-22T12:07:45Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an overview of the Research and Innovation Actions funded by the European Union Horizon 2020 and Horizon Europe, with specific reference to the projects under the EUCloudEdgeIoT umbrella.  &lt;br /&gt;
&lt;br /&gt;
The project listing is constantly updated, and will be linked to external repositories to further expand the knowledge base available in this Wiki.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[ItemType::EU Project]]&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?CORDIS URL&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Research_and_Innovation_Actions&amp;diff=231</id>
		<title>Research and Innovation Actions</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Research_and_Innovation_Actions&amp;diff=231"/>
		<updated>2025-10-22T12:07:26Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an overview of the Research and Innovation Actions funded by the European Union Horizon 2020 and Horizon Europe, with specific reference to the projects under the EUCloudEdgeIoT umbrella.  &lt;br /&gt;
&lt;br /&gt;
The project listing is constantly updated, and will be linked to external repositories to further expand the knowledge base available in this Wiki.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[ItemType::EU Project]]&lt;br /&gt;
|?EU Project short name&lt;br /&gt;
|?Programme&lt;br /&gt;
|?EuroVoc ID&lt;br /&gt;
|?CORDIS URL&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Research_and_Innovation_Actions&amp;diff=230</id>
		<title>Research and Innovation Actions</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Research_and_Innovation_Actions&amp;diff=230"/>
		<updated>2025-10-22T12:06:53Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;This page provides an overview of the Research and Innovation Actions funded by the European Union Horizon 2020 and Horizon Europe, with specific reference to the projects under the EUCloudEdgeIoT umbrella.  &lt;br /&gt;
&lt;br /&gt;
The project listing is constantly updated, and will be linked to external repositories to further expand the knowledge base available in this Wiki.&lt;br /&gt;
&lt;br /&gt;
{{#ask:&lt;br /&gt;
[[ItemType::EU Project]]&lt;br /&gt;
|?EU Project short name&lt;br /&gt;
|?Programme&lt;br /&gt;
|?CORDIS URL&lt;br /&gt;
|format=table&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=XANDAR&amp;diff=229</id>
		<title>XANDAR</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=XANDAR&amp;diff=229"/>
		<updated>2025-10-22T12:05:25Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::XANDAR]]==&lt;br /&gt;
===[[EU Project full name::X-by-Construction Design framework for Engineering Autonomous &amp;amp; Distributed Real-time Embedded Software Systems]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/957210]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
The next generation of networked embedded systems (ES) necessitates rapid prototyping and high performance while maintaining key qualities like trustworthiness and safety. However, deployment of safety-critical ES suffers from complex software (SW) toolchains and engineering processes. Moreover, the current trend in autonomous systems relying on Machine Learning (ML) and AI applications in combination with fail-operational requirements renders the Verification and Validation (V&amp;amp;V) of these new systems a challenging endeavor. Prime examples are autonomous driving cars that are prone to various safety/security vulnerabilities. The XANDAR project is built to exactly match the goals defined within the ICT-50 Software Technologies call. XANDAR will deliver a mature SW toolchain (from requirements capture down to the actual code integration on target, including V&amp;amp;V) fulfilling the needs of the industry for rapid prototyping of interoperable and autonomous ES. Starting from a model-based system architecture, XANDAR will leverage novel automatic model synthesis and software parallelization techniques to achieve specific non-functional requirements, setting the foundation for a novel real-time, safety-, and security-by-Construction (X-by-Construction) paradigm. For the first time, XbC-guided code generation for non-deterministic ML/AI applications will be combined with novel runtime monitors to ensure fail-operation in the presence of runtime faults and security exploitations. The project provides a consortium covering the full spectrum of ES and software engineering. XANDAR will be validated by an automotive OEM (BMW) and the German Aerospace Center (DLR). 
Leading European SMEs and enterprises such as Vector, AVN, and fentISS, as well as successful academic partners, will contribute their diverse know-how in Model-Driven Engineering, Software Systems and V&amp;amp;V, multicore architectures, code generation, and security enforcement from higher-level behavioral models to actual runnables.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/software/software applications/system software]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon 2020]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Vitamin-V&amp;diff=228</id>
		<title>Vitamin-V</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Vitamin-V&amp;diff=228"/>
		<updated>2025-10-22T12:05:16Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::Vitamin-V]]==&lt;br /&gt;
===[[EU Project full name::Virtual Environment and Tool-boxing for Trustworthy Development of RISC-V based Cloud Services]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101093062]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
Vitamin-V aims to develop a complete RISC-V open-source software stack for cloud services with iso-performance to the cloud-dominant x86 counterpart, and a powerful virtual execution environment for software development, validation, verification, and test that considers the relevant RISC-V ISA extensions for cloud deployment. Specifically, commercial cloud systems make use of hardware features that are currently unavailable in RISC-V virtual environments (not to mention the lack of specific RISC-V hardware). These features include virtualization, cryptography and vectorization, for which Vitamin-V will add support in three virtual environments: QEMU, gem5 and cloud-FPGA prototype platforms. Vitamin-V focuses on and will provide support for EPI-based RISC-V designs for both the main CPUs and cloud-important accelerators (for memory compression). We will add the compiler (LLVM-based) and toolchain support for the ISA extensions. Moreover, novel approaches for the validation, verification, and test of software trustworthiness will be developed. Vitamin-V will port and evaluate several cutting-edge VMMs and container suites (i.e. VOSySmonitor, KVM, QEMU, Docker, RustVMM, Kata containers) and cloud management software (i.e. OpenStack and Kubernetes), together with their software and library dependencies (e.g. JVM, Python), as well as AI (i.e. TensorFlow) and Big Data applications (Apache Spark). These software suites are representative of the three cloud setups that will be demonstrated: classical (OpenStack), modern (Kubernetes), and serverless (RustVMM, Kata, Kubernetes). The cloud setups will be benchmarked against relevant AI (i.e. GoogLeNet, ResNet, VGG19), Big Data (TPC-DS), and serverless applications (FunctionBench, ServerlessBench). Vitamin-V aims to match the software performance of its x86 equivalent while contributing to RISC-V open-source virtual environments, software validation and cloud software suites.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/computer security/cryptography]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=VeriDevOps&amp;diff=227</id>
		<title>VeriDevOps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=VeriDevOps&amp;diff=227"/>
		<updated>2025-10-22T12:05:07Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::VeriDevOps]]==&lt;br /&gt;
===[[EU Project full name::VeriDevOps]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/957212]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
VeriDevOps is about fast, flexible system engineering that efficiently integrates development, delivery, and operations, thus aiming at quality deliveries with short cycle time to address ever-evolving challenges. Current system development practices are increasingly based on using both off-the-shelf and legacy components, which makes such systems prone to security vulnerabilities. Since DevOps promotes frequent software deliveries, verification methods and artifacts should be updated in a timely fashion to cope with the pace of the process. VeriDevOps aims at providing a faster feedback loop for verifying the security requirements, i.e. confidentiality, integrity, availability, authentication, authorization, and other quality attributes of large-scale cyber-physical systems. VeriDevOps focuses on optimizing the security verification activities by automatically creating verifiable models directly from security requirements, and using these models to check security properties on design models and generate artefacts (such as tests or monitors) that can be used later on in the DevOps process. More concretely, we will develop methods and tools for: 1) creating security models from textual specifications using natural language processing, 2) automatic security test creation from security models using model-based testing and model-based mutation testing techniques, and 3) generating (intelligent/adaptive, ML-based) security monitors for the operational phases. This brings together early security verification through formal modelling as well as test generation, selection, execution and analysis capabilities to enable companies to deliver quality systems with confidence in a fast-paced DevOps environment. 
Overall, VeriDevOps is using the results of formal verification of security requirements and design models created during the analysis and design phase for test and monitor generation to be used to enhance the feedback mechanisms during development and operation phases.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/software]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Terminet&amp;diff=226</id>
		<title>Terminet</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Terminet&amp;diff=226"/>
		<updated>2025-10-22T12:04:57Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::Terminet]]==&lt;br /&gt;
===[[EU Project full name::nexT gEneRation sMart INterconnectEd ioT]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/957406]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
The vision of TERMINET is to provide a novel next-generation reference architecture based on cutting-edge technologies such as SDN, multiple-access edge computing, and virtualisation for next-generation IoT, while introducing new, intelligent IoT devices for low-latency, market-oriented use cases. TERMINET’s primary intention is to bring more efficient and accurate decisions to the point of interest to better serve the final user, by applying distributed AI at the edge and using accelerated hardware and sophisticated software to support local AI model training through federated learning. Our solution aspires to reduce the complexity of connecting a vast number of heterogeneous devices through a flexible SDN-enabled middleware layer. It also aims to design, develop, and integrate novel, intelligent IoT devices such as smart glasses, haptic devices, energy harvesting modules, smart animal monitoring collars, AR/VR environments, and autonomous drones, to support new market-oriented use cases. A key ambition of the proposal is to foster AR/VR contextual computing by demonstrating applicable results in realistic use cases using cutting-edge IoT-enabled AR/VR applications. By designing and implementing an IoT-driven, decentralised, and distributed blockchain framework within manufacturing, TERMINET aims to support maintenance and supply chain optimisation. Our solution intends to apply a vertical security-by-design methodology, meeting the privacy-preserving and trust requirements of the NG-IoT architecture. To foster standardisation activities for the IoT ecosystem, TERMINET will provide novel disruptive business models. To evaluate its wide applicability, TERMINET will validate and demonstrate six realistic proof-of-concept use cases in compelling IoT domains such as energy, smart buildings, smart farming, healthcare, and manufacturing.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/internet/internet of things]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=TaRDIS&amp;diff=225</id>
		<title>TaRDIS</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=TaRDIS&amp;diff=225"/>
		<updated>2025-10-22T12:04:48Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::TaRDIS]]==&lt;br /&gt;
===[[EU Project full name::Trustworthy and Resilient Decentralised Intelligence for Edge Systems]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101093006]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
Developing and managing distributed systems is a complex task requiring expertise across multiple domains. This complexity increases considerably in swarm systems, which are highly dynamic and heterogeneous and require decentralised solutions that adapt to rapidly changing system conditions. The TaRDIS project focuses on supporting the correct and efficient development of applications for swarms and decentralised distributed systems, by combining a novel programming paradigm with a toolbox that supports the development and execution of applications. TaRDIS proposes a language-independent, event-driven programming paradigm that exposes, through an event-based interface, distribution abstractions and powerful decentralised machine learning primitives. The programming environment will assist in building correct systems by taking advantage of behavioural types to automatically analyse the components&#039; interactions, ensuring correctness-by-design of applications while taking into account application invariants and the properties of the target execution environment. The underlying TaRDIS distributed middleware will provide essential services, including data management and decentralised machine learning components. The middleware will hide the heterogeneity and address the dynamicity of the distributed execution environment by orchestrating and adapting the execution of different application components across devices in an autonomic and intelligent way. TaRDIS results will be integrated in a development environment, and also released as standalone tools, both of which can be used for developing applications for swarm systems. The project results will be validated in the context of four use cases provided by high-impact industrial partners, ranging from swarms of satellites and decentralised dynamic marketplaces to decentralised machine learning for personal-assistant applications and the distributed control process of a smart factory.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/engineering and technology/mechanical engineering/vehicle engineering/aerospace engineering/satellite technology]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=SmartEdge&amp;diff=224</id>
		<title>SmartEdge</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=SmartEdge&amp;diff=224"/>
		<updated>2025-10-22T12:04:32Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::SmartEdge]]==&lt;br /&gt;
===[[EU Project full name::Semantic Low-code Programming Tools for Edge Intelligence]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101092908]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
The objective of the SMARTEDGE project is to enable the dynamic integration of decentralised edge intelligence at runtime while ensuring reliability, security, privacy, and scalability. We will achieve this by enabling a semantic-based interplay of the edge devices of such systems via a cross-layer toolchain that facilitates the seamless, real-time discoverability and composability of autonomous intelligence swarms. Hence, an application can be freely built by distributing the processing, data fusion, and control across heterogeneous sensors, devices, and edges with ubiquitous low-latency connectivity. The goal of this project is to develop a SMARTEDGE solution with a low-code programming environment comprising various tools: (1) Continuous Semantic Integration (CSI); (2) Dynamic Swarm Network (DSW); and (3) a Low-code Toolchain for Edge Intelligence. CSI allows the SMARTEDGE solution to interact with devices according to (i) a standardized semantic interface, via (ii) a continuous conversion process based on declarative mappings and scalable from edge to cloud, while (iii) providing a declarative approach for the creation and orchestration of apps based on swarm intelligence. DSW provides (i) automatic discovery and dynamic swarm network formation in near real time, (ii) hardware-accelerated in-network operations for context-aware swarm networking, and (iii) embedded network security. The low-code toolchain provides (i) semantic-driven multimodal stream fusion for edge devices; (ii) swarm elasticity via edge-cloud interplay; (iii) adaptive coordination and optimization; and (iv) a cross-layer toolchain for the device-edge-cloud continuum. The SMARTEDGE solution will be comprehensively demonstrated across four application areas: automotive, city, factory, and health, through the strong collaboration of industrial partners Dell, Siemens, Bosch, IMC, Conveq, Cefriel, and NVIDIA with eight research institutes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/engineering and technology/electrical engineering, electronic engineering, information engineering/information engineering/telecommunications/telecommunications networks/data networks]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Serrano&amp;diff=223</id>
		<title>Serrano</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Serrano&amp;diff=223"/>
		<updated>2025-10-22T12:04:23Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::Serrano]]==&lt;br /&gt;
===[[EU Project full name::TRANSPARENT APPLICATION DEPLOYMENT IN A SECURE, ACCELERATED AND COGNITIVE CLOUD CONTINUUM]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101017168]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
SERRANO’s overall ambition is to introduce a novel ecosystem of cloud-based technologies, spanning from specialized hardware resources up to software toolsets. This will enable application-specific service instantiation and optimal customization based on the workloads to be processed, in a holistic manner, thus supporting highly demanding, dynamic, and security-critical applications. SERRANO is not only tuned and fully aligned with current trends in the cloud computing sector towards expanding cloud infrastructures to efficiently integrate edge resources; it also transparently integrates HPC resources in order to provide an infrastructure that goes beyond the scope of the “normal” cloud and realizes a true computing continuum. SERRANO introduces an abstraction layer that transforms the distributed edge, cloud, and HPC resources into a single borderless infrastructure, while also facilitating their automated and cognitive orchestration. It proposes the introduction and evolution of novel key concepts and approaches that aim to close existing technology gaps, towards the realization of advanced infrastructures able to meet the stringent requirements of future applications and services. It will develop technologies and mechanisms related to security and privacy in distributed computing and storage infrastructures, hardware and software acceleration on cloud and edge, cognitive resource orchestration, dynamic data movement and task offloading between edge/cloud/HPC, transparent application deployment, energy efficiency, and real-time, zero-touch adaptability.
Finally, to highlight the proposed ecosystem’s scientific and technological significance, SERRANO will demonstrate three high-impact use cases related to (i) secure cloud and edge storage over a diversity of cloud resources, (ii) fintech, by supporting latency-sensitive and safety-critical digital services in the financial sector, and (iii) machine anomaly detection in manufacturing for Industry 4.0.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/software]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon 2020]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Riser&amp;diff=222</id>
		<title>Riser</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Riser&amp;diff=222"/>
		<updated>2025-10-22T12:04:14Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::Riser]]==&lt;br /&gt;
===[[EU Project full name::RISC-V for Cloud Services]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101092993]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
Building on top of outcomes from the EPI and EUPilot projects, RISER will develop the first all-European RISC-V cloud server infrastructure, significantly enhancing Europe&#039;s open strategic autonomy. RISER will leverage and validate open hardware high-speed interfaces combined with a fully-featured operating system environment and runtime system, enabling the integration of low-power components, including the RISC-V processor chips from EPI and EUPilot and LPDDR4 memories, in a novel energy-efficient cloud architecture. RISER brings together 7 partners from industry and academia to jointly develop and validate open-source designs for standardized form-factor system platforms suitable for supporting cloud services. Specifically, RISER will build the following two cloud infrastructures: (1) An accelerator platform, which includes the ARM-based RHEA processor from EPI and a PCIe acceleration board, developed within the project, that will integrate up to four RISC-V-based EPI and EUPilot chips. (2) A microserver platform, which interconnects up to ten microserver boards, all developed by the project, each one supporting up to four RISC-V chips coupled with high-speed storage and networking. Embracing hyperconvergence, the microserver architecture will allow distributed storage and memory to be used by any processor in the system with very low overhead. The open-source system board designs of RISER will also be accompanied by open-source low-level firmware and systems software, and a representative Linux-based software stack to support cloud services. To evaluate and demonstrate the capabilities of the RISER platforms, we will develop three use cases: (a) acceleration of compute workloads, (b) networked object and key-value storage, and (c) containerized execution as part of a provider-managed IaaS environment.
RISER will offer open access to the microserver platform, facilitating uptake and enhancing the commercialization path of project results.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/software]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=P2CODE&amp;diff=221</id>
		<title>P2CODE</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=P2CODE&amp;diff=221"/>
		<updated>2025-10-22T12:04:05Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::P2CODE]]==&lt;br /&gt;
===[[EU Project full name::Programming Platform for Intelligent Collaborative Deployments over Heterogeneous Edge-IoT Environments]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101093069]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
P2CODE envisions the design and development of an open platform for the deployment and dynamic management of end-user applications over distributed, heterogeneous, and trusted IoT-Edge node infrastructures, with enhanced programmability features and tools at both the network infrastructure level and the service design and operational level. The platform is implemented following three innovative design approaches: i) The deployment and management of applications is conducted by an orchestration framework that follows a vertically layered approach from the end-user interface to the infrastructure management, while spanning horizontally across the device-edge-core-cloud continuum. The deployment follows the user-defined networking and operational features of the application in its northbound interface and a tight integration with state-of-the-art IoT, edge/cloud computing, and networking platforms in its southbound interface through a well-defined driver API framework. With this approach, full programmability and reconfigurability of resources across the continuum is enabled. ii) An open and extensible programming toolset facilitates application development and deployment for large swarms of devices at the edge through a multi-role Internal Developer Platform (IDP), as well as new feature development and testing. iii) A secure and trusted framework handles the registration and authentication of IoT devices and edge nodes entering the system, as well as data sharing and application deployment. The concept is tested and validated over a mature testing environment that integrates diverse IoT application areas in smart logistics, manufacturing, utility inspection, and community PPDR over a programmable infrastructure extended to an O-RAN, 5G, and SDN-enabled core cloud.
The consortium addresses all the required development sectors, from platform technology innovations to the supported IoT infrastructure and applications, including end-user interfacing and resource management intelligence.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/engineering and technology/electrical engineering, electronic engineering, information engineering/information engineering/telecommunications/telecommunications networks/mobile network/5G]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=OpenSwarm&amp;diff=220</id>
		<title>OpenSwarm</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=OpenSwarm&amp;diff=220"/>
		<updated>2025-10-22T12:03:54Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::OpenSwarm]]==&lt;br /&gt;
===[[EU Project full name::Orchestration and Programming ENergy-aware and collaborative Swarms With AI-powered Reliable Methods]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101093046]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
Low-power wireless technology tends to be used today for simple monitoring applications, in which raw sensor data is reported periodically to a server for analysis. The ambition of the OpenSwarm project is to trigger the next revolution in these data-driven systems by developing truly collaborative and distributed smart nodes, through groundbreaking R&amp;amp;I in three technological pillars: efficient networking and management of smart nodes, collaborative energy-aware Artificial Intelligence (AI), and energy-aware swarm programming. Results are implemented in an open software package called “OpenSwarm”, which is verified in our labs on two 1,000-node testbeds. OpenSwarm is then validated in five real-world proof-of-concept use cases, covering four application domains: Renewable Energy Community (Cities &amp;amp; Community), Supporting Human Workers in Harvesting (Environmental), Ocean Noise Pollution Monitoring (Environmental), Health and Safety in Industrial Production Sites (Industrial/Health), and Moving Networks in Trains (Mobility). A comprehensive dissemination, exploitation, and communication plan (including a diverse range of activities related to standardization, education and outreach, open science, and startup formation) amplifies the expected impacts of OpenSwarm, achieving a step change that enables novel, future energy-aware swarms of collaborative smart nodes with wide-ranging benefits for the environment, industry, and society.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/software]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=OpenCUBE&amp;diff=219</id>
		<title>OpenCUBE</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=OpenCUBE&amp;diff=219"/>
		<updated>2025-10-22T12:03:40Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::OpenCUBE]]==&lt;br /&gt;
===[[EU Project full name::Open-Source Cloud-Based Services on EPI Systems]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101092984]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
This project proposes to design OpenCUBE, a full-stack solution for a validated European cloud computing blueprint to be deployed on European hardware infrastructure. OpenCUBE will develop a custom cloud installation with the guarantee that an entirely European solution, based on SiPearl processors and Semidynamics RISC-V accelerators, can be deployed reproducibly. OpenCUBE will be built on industry-standard open APIs using open-source components and will provide a unified software stack that captures the different best practices and open-source tooling at the operating system, middleware, and system management levels. It will thus provide a solid basis for European cloud services, research, and commercial deployments envisioned to be core to federated digital services via Gaia-X. To remain competitive and support the European Green Deal, OpenCUBE is designed to make energy awareness a core feature at all levels of the stack, exploiting the advanced features of the SiPearl Rhea processor family at the hardware level and exposing the necessary APIs at the site level, up to and including interfaces to the electricity grid. The project will leverage representative workloads, such as ECMWF production and Digital Twin workflows, as drivers for the design and deployment of the cluster infrastructure. We will collaborate closely with the projects developing the virtual environments and the open hardware interfaces for current and future European processor and coprocessor technology.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/software]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=OASEES&amp;diff=218</id>
		<title>OASEES</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=OASEES&amp;diff=218"/>
		<updated>2025-10-22T12:03:29Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::OASEES]]==&lt;br /&gt;
===[[EU Project full name::Open Autonomous programmable cloud appS &amp;amp; smart EdgE Sensors]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101092702]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
The massive increase in device connectivity and generated data has resulted in the proliferation of intelligent processing services that create insights and exploit data in a multi-modal manner. Currently, the most powerful data processing operates in a centralized manner in the cloud, which provides the ability to scale and allocate resources on demand and efficiently. Centralized processing and cloud hosting, however, bind services and applications to operate in a resource-restricted manner, usually relying on large single entities to provide i) authentication, ii) data storage, iii) data processing, iv) connectivity, and v) vendor-locked environments for development and orchestration. This significantly limits users&#039; control over their data governance and even their identity management. Similarly, existing solutions for edge device authentication require a centralized entity to trust and authenticate devices, resulting in a non-portable identification paradigm. OASEES aims to create an open, decentralized, intelligent, programmable edge framework for swarm architectures and applications, leveraging the Decentralized Autonomous Organization (DAO) paradigm and integrating Human-in-the-Loop (HITL) processes for efficient decision making. The OASEES vision is to provide open tools and secure environments for swarm programming and orchestration across numerous fields, in a completely decentralized manner. An important aspect in this process is identification and identity management, for which OASEES targets the implementation of a portable, privacy-preserving ID federation system for edge devices and services, with full compliance and compatibility with the GAIA-X federation and IDSA trust directives and specifications. This situation solidifies the need for an integrated enabler framework tailored to the edge’s extreme data processing demands, using different edge accelerators, i.e. GPU, NPU, SNN, and Quantum.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/engineering and technology/electrical engineering, electronic engineering, information engineering/electronic engineering/computer hardware/quantum computers]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Nephele&amp;diff=217</id>
		<title>Nephele</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Nephele&amp;diff=217"/>
		<updated>2025-10-22T12:03:19Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::Nephele]]==&lt;br /&gt;
===[[EU Project full name::A LIGHTWEIGHT SOFTWARE STACK AND SYNERGETIC META-ORCHESTRATION FRAMEWORK FOR THE NEXT GENERATION COMPUTE CONTINUUM]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101070487]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
The vision of NEPHELE is to enable the efficient, reliable, and secure end-to-end orchestration of hyper-distributed applications over programmable infrastructure spanning the compute continuum from Cloud-to-Edge-to-IoT, removing existing openness and interoperability barriers in the convergence of IoT technologies with cloud and edge computing orchestration platforms, and introducing automation and decentralized intelligence mechanisms powered by 5G and distributed AI technologies. The NEPHELE project aims to introduce two core innovations, namely: (i) an IoT and edge computing software stack that leverages the virtualization of IoT devices at the edge part of the infrastructure and supports openness and interoperability in a device-independent way. Through this software stack, management of a wide range of IoT devices and platforms can be realised in a unified way, avoiding the use of middleware platforms, while edge computing functionalities can be offered on demand to efficiently support IoT applications’ operations. (ii) a synergetic meta-orchestration framework for managing the coordination between cloud and edge computing orchestration platforms, through high-level scheduling supervision and definition, based on the adoption of a “system of systems” approach. The NEPHELE outcomes will be demonstrated, validated, and evaluated in a set of use cases across various vertical industries, including areas such as disaster management, logistics operations in ports, energy management in smart buildings, and remote healthcare services. Two successive open calls will also take place, and a wide open-source community is envisaged to support the NEPHELE outcomes.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/internet/internet of things]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Nemo&amp;diff=216</id>
		<title>Nemo</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Nemo&amp;diff=216"/>
		<updated>2025-10-22T12:03:09Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::Nemo]]==&lt;br /&gt;
===[[EU Project full name::Next Generation Meta Operating System]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101070118]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
NEMO aims to establish itself as the game changer of the AIoT-Edge-Cloud Continuum by introducing an open-source, flexible, adaptable, cybersecure, and multi-technology meta-Operating System, sustainable during and after the end of the project via the Eclipse Foundation (a NEMO consortium member). To achieve technology maturity and massive adoption, NEMO will not “reinvent the wheel”, but will leverage and interface existing systems, technologies, and open standards, and introduce novel concepts, tools, testing facilities/Living Labs, and engagement campaigns to go beyond today’s state of the art, conduct breakthrough research, and create sustainable innovation, already within the project lifetime. NEMO will introduce innovations at different layers of the protocol stack, enabling on-device cybersecure federated ML/DRL, and deliver time-triggered (TSN) multipath ad-hoc/hybrid self-organized and zero-delay failback/self-healing multi-cloud clusters, a multi-technology Secure Execution Environment, a Service Level Objectives-based meta-Orchestrator, Plugin and Apps Lifecycle Management, and Intent-Based programming tools. Moreover, NEMO will be cybersecure and trusted “by design” and “by innovation”, adopting state-of-the-art mechanisms such as Mutual TLS and Digital Identity Attestation. NEMO will be validated in 5 of the most prominent industrial sectors (i.e. Farming, Energy, Mobility/City, Industry 4.0, and Media/XR) and 8 use cases in 5+1 Living Labs, utilizing more than 30 heterogeneous IoT devices and real 5G infrastructure. The impact will not only safeguard the EU position in the data economy and application verticals, but also improve energy efficiency and reduce pesticide use and the CO2 footprint. Beyond Eclipse adoption, NEMO&#039;s sustainability, wide acceptance, and SME engagement will be achieved via open-source ecosystems, standardization initiatives, and 2 Open Calls that will provide financial support of €1.8M and access to the NEMO Living Labs to SMEs, enlarging NEMO by at least 16 new entities.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/engineering and technology/electrical engineering, electronic engineering, information engineering/information engineering/telecommunications/telecommunications networks/mobile network/5G]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=NebulOuS&amp;diff=215</id>
		<title>NebulOuS</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=NebulOuS&amp;diff=215"/>
		<updated>2025-10-22T12:02:57Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::NebulOuS]]==&lt;br /&gt;
===[[EU Project full name::A META OPERATING SYSTEM FOR BROKERING HYPER-DISTRIBUTED APPLICATIONS ON CLOUD COMPUTING CONTINUUMS]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101070516]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
NebulOuS will accomplish substantial research contributions in the realms of cloud and fog computing brokerage by introducing advanced methods and tools for enabling secure and optimal application provisioning and reconfiguration over the cloud computing continuum. NebulOuS will develop a novel Meta Operating System and platform for enabling transient fog brokerage ecosystems that seamlessly exploit edge and fog nodes, in conjunction with multi-cloud resources, to cope with the requirements posed by low-latency applications. The envisaged NebulOuS solution includes the following main directions of work: i. development of appropriate modelling methods and tools for describing the cloud computing continuum, application requirements, and data streams; these methods and tools will be used for assuring the QoS of the provisioned brokered services; ii. efficient comparison of available offerings, using appropriate multi-criteria decision-making methods that are able to consider all dimensions of consumer requirements; iii. intelligent management of applications, workflows and data streams in the cloud computing continuum; iv. addressing in a unified manner the security aspects emerging in transient cloud computing continuums (e.g., access control, secure network overlays, etc.); v. concluding and monitoring smart-contract-based service level agreements.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/biological sciences/ecology/ecosystems]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=MLSysOps&amp;diff=214</id>
		<title>MLSysOps</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=MLSysOps&amp;diff=214"/>
		<updated>2025-10-22T12:02:43Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::MLSysOps]]==&lt;br /&gt;
===[[EU Project full name::Machine Learning for Autonomic System Operation in the Heterogeneous Edge-Cloud Continuum]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101092912]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
MLSysOps will achieve substantial research contributions in the realm of AI-based system adaptation across the cloud-edge continuum by introducing advanced methods and tools to enable optimal system management and application deployment. MLSysOps will design, implement and evaluate a complete framework for autonomic end-to-end system management across the full cloud-edge continuum. MLSysOps will employ a hierarchical agent-based AI architecture to interface with the underlying resource management and application deployment/orchestration mechanisms of the continuum. Adaptivity will be achieved through continual ML model learning in conjunction with intelligent retraining concurrently with application execution, while openness and extensibility will be supported through explainable ML methods and an API for pluggable ML models. Flexible and efficient application execution on heterogeneous infrastructures and nodes will be enabled through innovative portable container-based technology. Energy efficiency, performance, low latency, efficient, resilient and trusted tier-less storage, cross-layer orchestration including resource-constrained devices, resilience to imperfections of physical networks, and trust and security are key elements of MLSysOps, addressed using ML models. The framework architecture disassociates management from control and seamlessly interfaces with popular control frameworks for different layers of the continuum. The framework will be evaluated using research testbeds as well as two real-world application-specific testbeds in the domains of smart cities and smart agriculture, which will also be used to collect the system-level data necessary to train and validate the ML models, while realistic system simulators will be used to conduct scale-out experiments.
The MLSysOps consortium is a balanced blend of academic/research and industry/SME partners, bringing together the necessary scientific and technological skills to ensure successful implementation and impact.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/internet/internet of things]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=Intend&amp;diff=213</id>
		<title>Intend</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=Intend&amp;diff=213"/>
		<updated>2025-10-22T12:02:30Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::Intend]]==&lt;br /&gt;
===[[EU Project full name::INtentify future Transport rEsearch NeeDs]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/769638]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
INTEND will deliver an elaborated study of the research needs and priorities in the transport sector, utilising a systematic data collection method. Megatrends that will affect the future transport system will be identified through a literature review. To ensure the validity of the results, the Analytical Network Process will be used to weight the megatrends and derive reliable outcomes on the most predominant trends. Finally, INTEND will develop a transport agenda that will pave the way to an innovative and competitive European transport sector. The project is driven by three main objectives: (1) define the transport research landscape; (2) define the megatrends and their impact on research needs; (3) identify the main transport research needs and priorities. To enable a wide range of stakeholders to gain access to the results, INTEND will develop an online platform, the INTEND Synopsis tool, which will constitute a dynamic knowledge base repository on the major developments in the transport sector. This will provide a visualisation of INTEND&#039;s main outcomes. The basis for the platform will be the Transport Synopsis Tool already developed under the project RACE2050, coordinated by TUB. The repository will be updated and integrated into the INTEND website to provide a comprehensive picture of all forward-looking studies focusing on technological developments, megatrends and policies. The INTEND consortium represents a unique group of highly competent and experienced research teams, composed specifically for the purpose of the project. Their selection was based on the following criteria: (1) personnel and infrastructure capacity to adequately implement the project; (2) established international relationships; (3) team-working experience.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/engineering and technology/mechanical engineering/vehicle engineering/automotive engineering/autonomous vehicles]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon 2020]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=IntellIoT&amp;diff=212</id>
		<title>IntellIoT</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=IntellIoT&amp;diff=212"/>
		<updated>2025-10-22T12:02:21Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::IntellIoT]]==&lt;br /&gt;
===[[EU Project full name::Intelligent, distributed, human-centered and trustworthy IoT environments]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/957218]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
The traditional cloud-centric IoT has clear limitations, e.g. unreliable connectivity, privacy concerns, or high round-trip times. IntellIoT overcomes these challenges in order to enable NG IoT applications. IntellIoT’s objectives aim at developing a framework for intelligent IoT environments that execute semi-autonomous IoT applications, which evolve by keeping the human-in-the-loop as an integral part of the system. Such intelligent IoT environments enable a suite of novel use cases. IntellIoT focuses on: agriculture, where a tractor is semi-autonomously operated in conjunction with drones; healthcare, where patients are monitored by sensors to receive advice and interventions from virtual advisors; and manufacturing, where highly automated plants are shared by multiple tenants who utilize machinery from third-party vendors. In all cases a human expert plays a key role in controlling and teaching the AI-enabled systems. The following 3 key features of IntellIoT’s approach are highly relevant for the work programme as they address the call’s challenges: (1) Human-defined autonomy is established through distributed AI running on intelligent IoT devices under resource constraints, while users teach and refine the AI via tactile interaction (with AR/VR). (2) De-centralised, semi-autonomous IoT applications are enabled by self-aware agents of a hypermedia-based multi-agent system, defining a novel architecture for the NG IoT. It copes with interoperability by relying on W3C WoT standards and enabling automatic resolution of incompatibility constraints. (3) An efficient, reliable computation &amp;amp; communication infrastructure is powered by 5G and dynamically manages and optimizes the usage of network and compute resources in a closed loop. Integrated security assurance mechanisms provide trust, and DLTs are made accessible under resource constraints to enable smart contracts and show transparency of performed actions.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/engineering and technology/electrical engineering, electronic engineering, information engineering/information engineering/telecommunications/telecommunications networks/mobile network/5G]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon 2020]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=ICOS&amp;diff=211</id>
		<title>ICOS</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=ICOS&amp;diff=211"/>
		<updated>2025-10-22T12:00:39Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::ICOS]]==&lt;br /&gt;
===[[EU Project full name::Towards a functional continuum operating system]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101070177]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
The unstoppable proliferation of novel computing and sensing device technologies, and the ever-growing demand for data-intensive applications in the edge and cloud, are driving a paradigm shift in computing around the dynamic, intelligent and yet seamless interconnection of IoT, edge and cloud resources in one single computing system, forming a continuum. Many research initiatives have focused on deploying a sort of management plane intended to properly manage the continuum. Simultaneously, several solutions exist aimed at managing edge and cloud systems, though not suitably addressing the whole set of continuum challenges. The next step is, without doubt, the design of an extended, open, secure, trustable, adaptable, technology-agnostic and much more complete management strategy, covering the full continuum, i.e. IoT-to-edge-to-cloud, with a clear focus on the network connecting the whole stack, leveraging off-the-shelf technologies (e.g., AI, data, etc.), but also open to accommodating novel services as technology progresses. The ICOS project aims at covering the set of challenges that come up when addressing this continuum paradigm, proposing an approach embedding a well-defined set of functionalities and culminating in the definition of an IoT2cloud Operating System (ICOS). Indeed, the main objective of the ICOS project is to design, develop and validate a meta operating system for a continuum, by addressing the challenges of: i) device volatility and heterogeneity, continuum infrastructure virtualization and diverse network connectivity; ii) optimized and scalable service execution and performance, as well as resource consumption, including power consumption; iii) guaranteed trust, security and privacy; and iv) reduction of integration costs and effective mitigation of cloud provider lock-in effects, in a data-driven system built upon the principles of openness, adaptability, data sharing and a future edge market scenario for services and data.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/internet/internet of things]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=HYPER-AI&amp;diff=210</id>
		<title>HYPER-AI</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=HYPER-AI&amp;diff=210"/>
		<updated>2025-10-22T12:00:22Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::HYPER-AI]]==&lt;br /&gt;
===[[EU Project full name::Hyper-Distributed Artificial Intelligence Platform for Network Resources Automation and Management Towards More Efficient Data Processing Applications]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101135982]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
In HYPER-AI, we work with smart virtual computing entities (nodes) that come from a variety of infrastructures spanning all three of the so-called computing continuum&#039;s layers: the Cloud, the Edge, and IoT. HYPER-AI focuses on intensive data-processing applications that have the potential to improve their footprint when hyper-distributed in an optimized manner. In order to give targeted applications access to computational, storage, or network services, HYPER-AI implements the idea of computing swarms as autonomous, self-organized, and opportunistic networks of smart nodes. These networks may offer a diverse and heterogeneous set of resources (processing, storage, data, communication) at all levels and have the ability to dynamically connect, interact, and cooperate. HYPER-AI proposes semantic representation concepts to enable the abstraction of heterogeneous resources in a homogeneous way, under a common annotation (computing node), across the whole range of network infrastructures. The main orchestration and control concept of HYPER-AI is inspired by autonomic systems (self-CHOP principles) which employ swarmed computing schemes. Its objective is to ease the design, execution, and monitoring of smart multi-node (swarm) deployment scenarios, through appropriate AIs for self-configuration (nodes&#039; assigned resources), self-healing (swarmed nodes&#039; lifecycle), self-optimizing (exploiting built-in situation awareness mechanisms) and self-protecting (intrusion detection, privacy, security, encryption and identity management) at application runtime. In order to support dynamic and data-driven application workflows, HYPER-AI proposes the flexible integration of resources at the edge, in the core cloud, and along the big-data processing and communication channel, enabling their energy-, time- and cost-efficient execution. Finally, distributed ledger concepts for security, privacy, and encryption, as well as AI-based intrusion detection, are also considered.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/internet/internet of things]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Relevant IPCEI-CIS Reference Architecture components:&#039;&#039;&#039; ===&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Management]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Logging]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Monitoring and Alerting]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Performance Management]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Fault Management]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Catalog/Repository Management]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Operation Automation]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Security and compliance]]&#039;&#039;&#039;&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Identity and Access Management]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Identity Management]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Key Management Service]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Audit Log]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Service Compliance Verification]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Sustainability]]&#039;&#039;&#039;,&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Application layer]]&#039;&#039;&#039;,&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Data layer]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Data Pipelines]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Data Modelling]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Data Exposure]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Data Policy Control]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Data Federation]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::AI Layer]]&#039;&#039;&#039;&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Cloud-Edge Training]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Cloud-Edge Inference]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Federated Learning]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Service orchestration]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Service Orchestrator]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Cloud Edge Platform]]&#039;&#039;&#039;&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Physical Infrastructure Manager]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Virtual Infrastructure Platform Manager]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Cloud Edge Access Control]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Cloud Edge Resource Repository]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Serverless Orchestrator (FaaS)]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Virtualization]]&#039;&#039;&#039;&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Virtual Infrastructure Manager (IaaS)]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Physical Cloud Edge Resources]]&#039;&#039;&#039;,&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Physical Network Resources]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=FLUIDOS&amp;diff=209</id>
		<title>FLUIDOS</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=FLUIDOS&amp;diff=209"/>
		<updated>2025-10-22T12:00:09Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::FLUIDOS]]==&lt;br /&gt;
===[[EU Project full name::Flexible, scaLable and secUre decentralIzeD Operating System]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101070473]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
FLUIDOS leverages the enormous, unused processing capacity at the edge, scattered across heterogeneous edge devices that struggle to integrate with each other and to coherently form a seamless computing continuum. By way of a disruptive, open-source paradigm that hinges upon secure protocols for advertisement and discovery, AI-powered resource orchestration and intent-based service integration, FLUIDOS will create a fluid, dynamic, scalable and trustable computing continuum that spans across devices, unifies edge and cloud in an energy-aware fashion, and possibly extends beyond administrative boundaries. Notwithstanding its innovation signature, FLUIDOS will build upon consolidated operating systems and orchestration solutions like Kubernetes, on top of which it will provide a new, enriched layer enacting resource sharing through advertisement/agreement procedures (in the horizontal dimension) and hierarchical aggregation of nodes, inspired by inter-domain routing in the Internet (in the vertical dimension). Intent-based orchestration will leverage advanced AI algorithms to optimize costs and energy usage in the continuum, promoting efficient usage of edge resources. A Zero-Trust paradigm will allow FLUIDOS to securely control and access geographically diverse resources, while Trusted Platform Modules will provide strong isolation and guarantee a safe deployment of applications and services. FLUIDOS will pursue the above goals through the creation of an open, collaborative ecosystem, focused on the development of a multi-stakeholder market of edge services and applications, promoting European digital autonomy. The involvement of stakeholders is planned from the outset of the project through pilots and demonstrators in the fields of intelligent energy, agriculture and logistics, which will challenge the capabilities of FLUIDOS to adapt to different environments and operating conditions, while showcasing its ground-breaking innovation potential.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/internet]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
	<entry>
		<id>https://wiki.nexusforum.martel-innovate.com/index.php?title=EDGELESS&amp;diff=208</id>
		<title>EDGELESS</title>
		<link rel="alternate" type="text/html" href="https://wiki.nexusforum.martel-innovate.com/index.php?title=EDGELESS&amp;diff=208"/>
		<updated>2025-10-22T11:59:56Z</updated>

		<summary type="html">&lt;p&gt;Admin: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;==[[EU Project short name::EDGELESS]]==&lt;br /&gt;
===[[EU Project full name::Cognitive edge-cloud with serverless computing]]===&lt;br /&gt;
&#039;&#039;&#039;Full project details (EU Research results portal):&#039;&#039;&#039; [[CORDIS URL::https://cordis.europa.eu/project/id/101092950]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Project description:&#039;&#039;&#039; ===&lt;br /&gt;
EDGELESS is set to efficiently operate serverless computing in extremely diverse computing environments, from resource-constrained edge devices to highly virtualised cloud platforms. By taking advantage of AI/ML solutions, it will enable automatic deployment and reconfiguration to fully exploit compute resources available on clusters of nearby edge nodes. EDGELESS will define novel orchestration systems that provide a flexible, horizontally scalable compute solution able to fully use heterogeneous edge resources, while preserving vertical integration with the cloud and the benefits of serverless, including its application programming model. It will address edge systems at the design stage, particularly targeting low-latency, high-reliability applications with computationally intensive tasks requiring specialised hardware or a trusted environment. This ambitious challenge will be met via distributed computing solutions that partition the edge environment into clusters, each managed as a local decentralised serverless platform. In each cluster, orchestration and scheduling of jobs will run smoothly thanks to real-time monitoring of short-term load/network/energy conditions and anticipatory AI-powered algorithms to manage lightweight virtualised lambda executors, e.g., unikernels. Environmental sustainability will be boosted by dynamically concentrating resources physically (e.g., by temporarily switching off far-edge devices) or logically (e.g., by dispatching tasks towards a specific set of nodes), at the expense of performance-tolerant applications. Clusters will cooperate with each other and with all the layers in the edge-cloud continuum to compose complex applications on demand through a FaaS paradigm.
EDGELESS innovations will be validated through testbeds (near-edge MEC and two small-device lab setups), integrated through a federated edge-cloud infrastructure, and three pilots: Autonomous Smart City Surveillance, Internet of Robotic Things, and HealthCare Assistants.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EuroVoc IDs:&#039;&#039;&#039; [[EuroVoc ID::/natural sciences/computer and information sciences/internet]]&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;EU Programme:&#039;&#039;&#039;&lt;br /&gt;
[[Programme::Horizon Europe]]&lt;br /&gt;
&lt;br /&gt;
=== &#039;&#039;&#039;Relevant IPCEI-CIS Reference Architecture components:&#039;&#039;&#039; ===&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Management]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Monitoring and Alerting]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Performance Management]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Operation Automation]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Security and compliance]]&#039;&#039;&#039;,&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Sustainability]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Benchmark, Metrics and Monitoring]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Application layer]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Application Designer]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Application Packager]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::API Gateway]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Data layer]]&#039;&#039;&#039;&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Data Pipelines]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Data Modelling]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::AI Layer]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Cloud-Edge Training]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Cloud-Edge Inference]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Service orchestration]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Service Orchestrator]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Application Performance Management]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Application Repository]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Cloud Edge Platform]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Cloud Edge Connectivity Manager]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Serverless Orchestrator (FaaS)]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Virtualization]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Virtual Infrastructure Manager (IaaS)]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Physical Cloud Edge Resources]]&#039;&#039;&#039;,&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Compute]],&lt;br /&gt;
[[IPCEI-CIS Reference Architecture::Storage]],&lt;br /&gt;
&#039;&#039;&#039;[[IPCEI-CIS Reference Architecture high level::Physical Network Resources]]&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
[[ItemType::EU Project]]&lt;/div&gt;</summary>
		<author><name>Admin</name></author>
	</entry>
</feed>