Committed to secure and efficient application packaging and deployment in cloud and edge environments.
Nubificus is a specialized research group focused on tackling complex virtualization and cloud-edge computing challenges. The team regularly collaborates with leading academic institutions and R&D organizations on European projects, driving innovation in hardware acceleration, serverless frameworks, and resource orchestration for next-generation distributed infrastructures.
Beyond our core expertise, the team is actively pushing boundaries in other cutting-edge domains, including IoT applications and AI workload optimization, bringing lightweight, high-performance execution to every layer of the modern computing stack.
From foundational research to real-world impact
Unikernels, lightweight VMs
GPUs, FPGAs, NPUs
OCI, CRI, Kubernetes
Attestation, TEE, DICE
Hardware acceleration for cloud & edge workloads
Unikernel container runtime for cloud-native workflows
Zero-trust device onboarding & lifecycle management
Real-time ML at the edge
Smart factory & fleet ops
Sub-ms cold starts
Distributed city-scale systems
Beyond virtualization — delivering impact across the modern computing stack
End-to-end ML pipeline optimization — from model training to inference at the edge. We accelerate AI workloads across heterogeneous hardware using vAccel, enabling real-time inference on GPUs, TPUs, and NPUs without vendor lock-in.
Secure, scalable IoT orchestration from sensor to cloud. Our edgeLink platform provides zero-trust device onboarding, OTA lifecycle management, and cloud-native workflows for thousands of heterogeneous endpoints.
Bridging centralized cloud infrastructure and distributed edge nodes through lightweight runtimes, unikernels, and container-native tooling — enabling seamless workload portability across every tier.
Hardware-rooted trust with confidential computing, DICE/EAT attestation, and reproducible software supply chains. We embed security at every layer — from silicon to orchestration.
Sub-millisecond cold starts, minimal memory footprints, and bare-metal-grade throughput. We engineer systems where every microsecond counts — from kernel bypass to hardware-aware scheduling.
Active contributors to the cloud-native ecosystem. Our work on OCI-compatible runtimes, Kubernetes integrations, and open acceleration APIs ensures interoperability and community-driven innovation.
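As an illustration of how an OCI-compatible unikernel runtime typically plugs into Kubernetes, the sketch below uses the standard RuntimeClass mechanism to route a Pod to an alternative container runtime handler. This is a minimal, generic example: the RuntimeClass name, the handler name `urunc`, and the container image are illustrative assumptions, not configuration taken from this page.

```yaml
# RuntimeClass mapping a name to a low-level runtime handler
# (the handler name "urunc" is assumed here for illustration;
# it must match a runtime configured in the node's containerd/CRI-O).
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: unikernel
handler: urunc
---
# A Pod opting into that runtime via runtimeClassName;
# the image reference is a placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: demo-app
spec:
  runtimeClassName: unikernel
  containers:
    - name: app
      image: example.org/app:latest
```

Because RuntimeClass is part of the upstream Kubernetes API, any OCI-compatible runtime exposed this way stays interchangeable with the default handler — which is the interoperability property the blurb above describes.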
Driving innovation through EU-funded collaborative research