Demystifying Cloud Native

Tech Byte

Staying up to date with the latest trends in the ever-changing technology landscape is both challenging and crucial. Cloud Native Computing has emerged as a paradigm shift, transforming the way applications are built, deployed, and scaled.

In a recent presentation by Alexandru Dejanu, Sr. Systems Engineer & SRE at Systematic, he embarked on the ambitious task of unravelling the complexities of cloud native in just 30 minutes. Despite the time constraints, Alex managed to provide valuable insights and demystify some of the key concepts in the cloud native landscape. Based on his presentation, this Tech Byte will give you an insight into the cloud native landscape and shed light on key concepts and the complexities surrounding it.

By Alexandru Dejanu, Sr. Systems Engineer & SRE

Introducing and Contextualising Cloud Native Computing

Founded in 2015, the Cloud Native Computing Foundation (CNCF) is a Linux Foundation project that sustains an ecosystem of open-source, vendor-neutral projects (at different maturity levels: sandbox, incubating, and graduated). Its mission is to make cloud native computing ubiquitous. But what does that mean, more precisely?

The essence of cloud native is that the technologies in the solution stack are interconnected in a loosely coupled manner and are more or less “plug-and-play” (just look at how many flavours of container runtimes or monitoring implementations there are in the CNCF landscape).

Snapshot showing (a part of) the CNCF Landscape

A widespread misconception is that one must use a particular technology, e.g. Kubernetes, to be cloud native, but alternatives such as Nomad or Docker Swarm also appear in the CNCF landscape. A solution can be cloud native even in an on-prem environment (a.k.a. a private cloud); therefore, if needed, the transition to hybrid or public cloud should be seamless.


Embracing Scalability and Flexibility

One of the fundamental tenets of Cloud Native Computing is scalability, which refers to an application’s ability to handle varying workloads by dynamically scaling resources up or down. Consider, for instance, a healthcare platform that handles the analysis of patient data for medical research. The workload on the platform varies based on the influx of data, which can be influenced by factors like ongoing clinical trials or new patient records. In this scenario, imagine a sudden influx of patient data due to initiating a large-scale clinical trial. 

Cloud native solutions seamlessly accommodate the increased workload by dynamically scaling resources, ensuring efficient and timely processing of the growing volume of medical data. This elasticity is achieved through containerisation, where healthcare applications and their dependencies are encapsulated in lightweight containers, facilitating rapid deployment and scaling as needed.
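In Kubernetes terms, this kind of elasticity is often expressed as a HorizontalPodAutoscaler. The sketch below is a hypothetical example (the workload name `patient-data-analyzer` and the thresholds are assumptions, not from the presentation) showing how a deployment could be scaled automatically with demand:

```yaml
# Hypothetical HorizontalPodAutoscaler for a data-analysis workload.
# Scales the Deployment between 2 and 20 replicas based on CPU utilisation.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: patient-data-analyzer   # assumed workload name, for illustration only
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: patient-data-analyzer
  minReplicas: 2
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas when average CPU exceeds 70%
```

When a large clinical trial starts and CPU usage climbs, Kubernetes adds replicas; when the influx subsides, it scales back down, so capacity follows the workload rather than being provisioned for the peak.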


VMs vs. Containers

Containers and Virtual Machines (VMs) are both cornerstone technologies for cloud native solutions; although they operate at different levels of abstraction and serve distinct purposes, they are very much intertwined.

Most materials concerning containers start by presenting containers as a “lightweight” approach towards packaging an application and all its dependencies. They are thus an alternative to Virtual Machines that does not require the overhead of running a separate operating system for each application.

In the real world, containers are a companion to virtualisation rather than a replacement: they typically run inside VMs or some form of VM-based isolation, because bare-metal containers alone lack the elasticity and scalability that most solutions need.


Security in Cloud Native Environments

Security is paramount in any technological landscape, and cloud native is no exception.

Cloud native security entails topics like container security, secure deployment practices, and the role of service meshes in enhancing security. Practices like regular CVE vulnerability scans, distroless images, and SBOMs (Software Bills of Materials) should be an integral part of the development lifecycle.
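To make the distroless and secure-deployment points concrete, here is a hedged sketch of a hardened Deployment snippet. The name `secure-api` is hypothetical, and the distroless image is one example of the genre; the point is the combination of a minimal base image and a restrictive security context:

```yaml
# Hypothetical Deployment fragment illustrating common hardening practices.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-api   # assumed name, for illustration only
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secure-api
  template:
    metadata:
      labels:
        app: secure-api
    spec:
      containers:
        - name: api
          # Distroless base: no shell or package manager, smaller attack surface
          image: gcr.io/distroless/java17-debian12
          securityContext:
            runAsNonRoot: true                # refuse to start as root
            readOnlyRootFilesystem: true      # block writes to the image filesystem
            allowPrivilegeEscalation: false   # no setuid-style privilege gain
```

Scanners and SBOM generators would then run against the built image in the CI pipeline, so every release ships with a known, auditable set of dependencies.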

As organisations transition to cloud native, understanding and addressing potential security challenges needs to be an ongoing process since the landscape is highly dynamic, and new vulnerabilities are discovered over time.

The Developer’s Perspective: Looking to the Future of Evolving Cloud Native Practices

The dynamic nature of the cloud native landscape calls for constant learning and adaptation. As technology continues to evolve, staying informed and embracing change are essential for navigating the shifting terrain of Cloud Native Computing. Here, the concept of platform engineering and practices like GitOps mark crucial aspects of the evolving landscape within Cloud Native Computing.

Platform engineering is emerging as a pivotal trend in response to the growing complexity of modern cloud native architectures. It is a shift in an organisation’s strategy that involves creating and maintaining a platform serving as a flexible, supported abstraction layer, connecting developers with the underlying technologies of their applications.

Platforms built on Kubernetes offer a consistent and scalable environment, simplifying the deployment and scaling of diverse workloads. By creating a robust platform layer, organisations can foster collaboration, maintain consistency across applications, and accelerate the development life cycle. Not to mention that by extending the Kubernetes API, organisations can define opinionated resources, allowing comprehensive platforms to be built on top of Kubernetes.
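Extending the Kubernetes API is typically done with a CustomResourceDefinition. The sketch below is hypothetical (the `platform.example.com` group and the `WebService` kind are invented for illustration): it shows how a platform team might expose a simple, opinionated resource so developers describe only what they care about, while a controller handles the underlying Deployments, Services, and so on:

```yaml
# Hypothetical CRD: an opinionated "WebService" abstraction for developers.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: webservices.platform.example.com   # assumed group/name, illustration only
spec:
  group: platform.example.com
  scope: Namespaced
  names:
    plural: webservices
    singular: webservice
    kind: WebService
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                image:          # the only two knobs developers need to set
                  type: string
                replicas:
                  type: integer
```

A developer would then apply a tiny `WebService` manifest instead of a stack of raw Kubernetes objects, which is precisely the abstraction-layer idea behind platform engineering.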

A methodology gaining traction in the cloud native landscape is GitOps, which streamlines deployment workflows by treating Git as the single source of truth.

Application definitions, configurations, and environments should be declarative and version-controlled, and tools like Argo CD or Flux provide comprehensive support for this: there is no need to run kubectl manually, because all changes are synced automatically from the repository.
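As a hedged illustration of this workflow, here is what an Argo CD Application manifest might look like. The repository URL, application name, and namespace are placeholders, not real resources; the manifest simply tells Argo CD which Git path to keep a cluster namespace in sync with:

```yaml
# Hypothetical Argo CD Application: keeps a namespace in sync with a Git path.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app                 # assumed name, for illustration only
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://example.com/org/my-app-config.git   # placeholder repo
    targetRevision: main
    path: k8s                  # directory of manifests to deploy
  destination:
    server: https://kubernetes.default.svc
    namespace: my-app
  syncPolicy:
    automated:
      prune: true              # delete cluster resources removed from Git
      selfHeal: true           # revert manual drift back to the Git state
```

With `automated` sync, a merged pull request is the deployment: the controller detects the new commit, applies it, and reverts any out-of-band changes, which is why GitOps removes the need for manual kubectl runs.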

In conclusion, given the dynamic and transformative nature of Cloud Native Computing, organisations need to embrace change to position themselves to build resilient, scalable, and future-ready infrastructure that can seamlessly adapt to the evolving demands of the digital era.


About the author

Alexandru Dejanu is a Senior Systems Engineer & SRE at Systematic dedicated to the Customer Operations team, where he helps both development and operations teams gain full visibility of the complete application lifecycle. With varied industry experience, Alex considers himself an opinionated and tech-agnostic “Jack of all trades” who loves helping others and sharing knowledge, be it on StackOverflow, Medium, at a conference or with his colleagues.

Systematic Tech Bytes

This is part of Systematic Tech Bytes, a series of tech blogs where our dedicated colleagues will share bits (and bytes) of their expertise, insights, and experiences in the ever-evolving world of technology. From the latest industry trends to insider tips, innovation highlights to problem-solving strategies, our team opens the door to their knowledge treasury, decoding the details of tech, one byte at a time.