Cloud computing terminologies you should know


If you are starting out in cloud computing, the sheer number of new terms can be overwhelming, and constantly stopping to look them up slows your grasp of the underlying concepts.

Let's take a look at some of these terminologies and their definitions.


An access control list (ACL) is a table that tells a computer operating system which access rights each user has to a particular system object, such as a file directory or individual file. Each object has a security attribute that identifies its access control list.
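As a rough sketch, an ACL can be pictured as a table mapping each user to the rights they hold on an object. The file name and user names below are made up for illustration:

```python
# Toy ACL: each object maps users to the set of rights they hold on it.
acl = {
    "report.txt": {"alice": {"read", "write"}, "bob": {"read"}},
}

def has_access(user, obj, right):
    """Return True if the ACL grants `user` the given right on `obj`."""
    return right in acl.get(obj, {}).get(user, set())
```

Here `has_access("bob", "report.txt", "write")` is denied because bob's entry only lists the read right.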


A container is a standard unit of software that packages up code and all its dependencies so that the application runs quickly and reliably from one computing environment to another. Available for both Linux and Windows-based applications, containerized software will always run the same, regardless of the infrastructure.


A cloud instance is a virtual server provisioned from a public or private cloud network. In cloud computing, a single piece of physical hardware is abstracted into software that can run on top of multiple computers, so a server in the cloud can easily be moved from one physical machine to another without going down.

In simple words, a server running our application is called an instance: think of one server as one instance. An instance is a virtual machine that runs our workloads in the cloud, and the terms virtual machine (VM) and instance are often used interchangeably.

The terms server and instance can also be used interchangeably, but in the cloud computing world, instance is generally preferred over server.


Platform as a service (PaaS), also called application platform as a service (aPaaS) or platform-based service, is a category of cloud computing services that provides a platform allowing customers to develop, run, and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app.


Infrastructure as a Service, commonly referred to as simply IaaS, is a form of cloud computing that delivers fundamental compute, network, and storage resources to consumers on demand, over the internet, and on a pay-as-you-go basis. IaaS enables end users to scale and shrink resources on an as-needed basis, reducing the need for high up-front capital expenditures or unnecessary owned infrastructure, especially in the case of spiky workloads. In contrast to PaaS, SaaS, and even newer computing models like containers and serverless, IaaS provides the lowest-level control of resources in the cloud.


AI as a service (AIaaS) is a third-party offering of artificial intelligence (AI) capabilities that enables organizations and individuals to experiment with language, vision, and speech understanding without a huge initial investment and with lower risk. AI typically involves a vast range of algorithms that allow computers to solve complex tasks by generalizing over data. AIaaS allows users to upload data, run complex models in the cloud, and receive results via a cloud platform or API, significantly reducing development time while improving time-to-value.


Autoscaling, also written auto-scaling and sometimes called automatic scaling, is a cloud computing technique for dynamically allocating computational resources. Depending on the load on a server farm or pool, the number of active servers typically varies automatically as user needs fluctuate.
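The core autoscaling decision can be sketched as a function that sizes the pool from the current load. The capacity figure and pool limits below are illustrative, not tied to any particular provider:

```python
import math

def desired_instances(current_load, capacity_per_instance,
                      min_instances=1, max_instances=10):
    """Number of instances needed to serve current_load,
    clamped to the pool's allowed range."""
    needed = math.ceil(current_load / capacity_per_instance)
    return max(min_instances, min(needed, max_instances))
```

For example, if each instance handles 100 requests per second, a load of 250 requests per second calls for 3 instances, while zero load still keeps the configured minimum of 1 running.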


An API is a set of definitions and protocols for building and integrating application software. API stands for application programming interface. APIs let your product or service communicate with other products and services without having to know how they’re implemented. This can simplify app development, saving time and money. When you’re designing new tools and products—or managing existing ones—APIs give you flexibility; simplify design, administration, and use; and provide opportunities for innovation.


Availability zones are data center locations isolated from each other as a safeguard against unexpected outages leading to downtime. The zones are typically geographically distinct. Businesses can opt to have one or multiple availability zones globally depending on their specific needs.

BACKEND AS A SERVICE (BaaS/mobile backend/mBaaS)

Backend as a service is a model for providing web app and mobile app developers with a way to link their applications to backend cloud storage and APIs exposed by back end applications while also providing features such as user management, push notifications, and integration with social networking services.

These services are provided via the use of custom software development kits (SDKs) and application programming interfaces (APIs). BaaS is a relatively recent development in cloud computing, with most BaaS startups dating from 2011 or later. Although a fairly nascent industry, trends indicate that these services are gaining mainstream traction with enterprise consumers.


Bring your own device (BYOD) is a growing trend in which employees use their own devices to connect to organizational networks and access work-related systems and potentially confidential or sensitive data. It is part of a larger IT consumerization trend in which consumer hardware and software are brought into the enterprise. BYOD can occur under the radar in the form of shadow IT; however, more and more organizations are leaning towards implementing formal BYOD policies. More specific variations of the term include bring your own apps (BYOA) and bring your own laptop (BYOL).


A Cloud Architect is an IT specialist who develops a company’s computing strategy. This strategy incorporates cloud adoption plans, cloud application design as well as cloud management and monitoring. Additional responsibilities include support for application architecture and deployment in cloud environments. The Architect also assists with cloud environments such as the public cloud, private cloud, and hybrid cloud. This professional draws on solid knowledge of the company’s cloud architecture and platform when designing and developing dynamic cloud solutions.


A cloud engineer is an IT professional responsible for any duties related to cloud computing, including planning, design, management, and maintenance. They may also be tasked with assessing an organization’s infrastructure and migrating different functions to a stable and reliable cloud-based system. The demand for cloud engineers is on the rise, as more companies move crucial business applications and processes to private, public, and hybrid cloud infrastructures.


Cloud load balancing is the process of distributing computing resources and workloads across several application servers running in a cloud environment. Like other forms of load balancing, it allows you to maximize application reliability and performance, but at a lower cost and with easier scaling to match demand, without loss of service. This helps ensure users have access to the applications they need, when they need them, without any problems.


Cloud migration is the process of moving data, applications, or other business elements to a cloud computing environment. There are various types of cloud migrations an enterprise can perform. One common model is the transfer of data and applications from a local, on-premises data center to the public cloud.


Cloud-native is a term used to describe container-based environments. Cloud-native technologies are used to develop applications built with services packaged in containers, deployed as microservices, and managed on elastic infrastructure through agile DevOps processes and continuous delivery workflows.


A cloud provider is a company that delivers cloud computing-based services and solutions to businesses and/or individuals. This service organization may provide rented and provider-managed virtual hardware, software, infrastructure, and other related services.


Cloud provisioning refers to the deployment and integration of an organization’s cloud computing services within its infrastructure. The cloud services in question can be hybrid, public, or private solutions. An organization may choose to host some services and applications within the public cloud while others remain on site behind the firewall.


A cluster refers to a group of linked servers and other resources that work together as if they were a single system. Clustering is a popular strategy for implementing parallel processing and high availability through fault tolerance and load balancing.


A container registry is a central place for your team to manage container images, perform vulnerability analysis, and control who has access to what with fine-grained access controls. During the CI/CD process, developers should have access to all the container images required for an application. Hosting all the container images in a single instance enables users to identify, commit, and pull images when they need to.

CONTENT DELIVERY NETWORK (Content Distribution Network/CDN)

A content delivery network (CDN) refers to a geographically distributed group of servers that work together to provide fast delivery of internet content. A CDN allows for the quick transfer of assets needed for loading internet content, including HTML pages, JavaScript files, stylesheets, images, and videos.


Customer relationship management (CRM) is one of many different approaches that allow a company to manage and analyze its interactions with its past, current, and potential customers. It uses data analysis about customers' history with a company to improve business relationships with customers, specifically focusing on customer retention and ultimately driving sales growth.


Data centers are simply centralized locations where computing and networking equipment is concentrated for the purpose of collecting, storing, processing, distributing, or allowing access to large amounts of data. They have existed in one form or another since the advent of computers.


Data loss prevention (DLP) is a set of tools and processes used to ensure that sensitive data is not lost, misused, or accessed by unauthorized users. DLP software classifies regulated, confidential, and business-critical data and identifies violations of policies defined by organizations or within a predefined policy pack, typically driven by regulatory compliance such as HIPAA, PCI-DSS, or GDPR.
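The classification step can be illustrated with a toy pattern-based scanner. The two patterns below (U.S. Social Security numbers and card-like digit runs) are deliberately simplistic stand-ins; real DLP products use far richer detection than this:

```python
import re

# Toy DLP-style classifier: flag text that matches sensitive-data
# patterns. These regexes are illustrative only.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card": re.compile(r"\b\d{4}(?:[ -]?\d{4}){3}\b"),
}

def classify(text):
    """Return the set of sensitive-data categories found in text."""
    return {name for name, pat in PATTERNS.items() if pat.search(text)}
```

A policy engine would then block, quarantine, or log any message whose classification set is non-empty.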


Data migration is the process of selecting, preparing, extracting, and transforming data and permanently transferring it from one computer storage system to another. Additionally, the validation of migrated data for completeness and the decommissioning of legacy data storage are considered part of the entire data migration process.


Disaster recovery is the area of security planning that deals with protecting an organization from the effects of major disasters that destroy part or all of its resources, including data records, IT equipment, and the organization’s physical space. A disaster recovery plan maps the quickest and most effective way work can be resumed after a disaster.


Desktops as a Service (DaaS) securely delivers virtual apps and desktops from the cloud to any device or location. This desktop virtualization solution provisions secure SaaS and legacy applications as well as full Windows-based virtual desktops and delivers them to your workforce. DaaS offers a simple and predictable pay-as-you-go subscription model, making it easy to scale up or down on demand. This turnkey service is easy to manage, simplifying many of the IT admin tasks of desktop solutions.


DevOps is a set of practices that works to automate and integrate the processes between software development and IT teams, so they can build, test, and release software faster and more reliably. The term DevOps was formed by combining the words “development” and “operations” and signifies a cultural shift that bridges the gap between development and operations teams, which historically functioned in silos.


Distributed computing is a computing concept in which the components of a software system, such as applications and data, are distributed among multiple computers to improve performance and efficiency. Distributed computing relies on network services and interoperability standards that specify how the components communicate with each other.


Data integrity is the overall accuracy, completeness, and consistency of data. Data integrity also refers to the safety of data with regard to regulatory compliance — such as GDPR compliance — and security. It is maintained by a collection of processes, rules, and standards implemented during the design phase. When the integrity of data is secure, the information stored in a database will remain complete, accurate, and reliable no matter how long it’s stored or how often it’s accessed. Data integrity also ensures that your data is safe from any outside forces.
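One common way to check integrity is to store a cryptographic digest alongside the data and recompute it later: any change to the bytes produces a different digest. A minimal sketch using Python's standard hashlib (the sample payloads are made up):

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest used to detect accidental or malicious changes."""
    return hashlib.sha256(data).hexdigest()

original = b"account balance: 100"
stored_digest = checksum(original)  # saved when the data is written

# Later, re-hashing and comparing digests reveals any tampering:
tampered = b"account balance: 900"
```

If `checksum(tampered)` no longer equals `stored_digest`, the record has been altered since it was stored.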


DNS management software is an application that controls Domain Name System (DNS) records and server clusters, enabling domain name owners to manage them easily. It greatly reduces human error when editing repetitive and complex DNS data. Using DNS management, one website can command the services of multiple servers.


In cloud computing, elasticity is defined as "the degree to which a system can adapt to workload changes by provisioning and de-provisioning resources in an autonomic manner, such that at each point in time the available resources match the current demand as closely as possible".


ELASTIC COMPUTING

Elastic computing is a cloud computing concept in which a cloud service provider offers flexible resources that can be scaled up or down according to user preferences and needs. It may also refer to the ability to provide flexible resources that can be expanded or resized when needed. Some of the affected resources include bandwidth, storage, and processing power.


Encryption is the method by which information is converted into a secret code that hides the information's true meaning. The science of encrypting and decrypting information is called cryptography. In computing, unencrypted data is also known as plaintext, and encrypted data is called ciphertext.
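The plaintext/ciphertext relationship can be illustrated with a toy XOR cipher; this is for illustration only, and real systems should use vetted algorithms such as AES through a maintained cryptography library:

```python
# Toy XOR stream "cipher" -- NOT secure, purely to show the
# plaintext -> ciphertext -> plaintext round trip.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key; applying the same
    function twice with the same key recovers the plaintext."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plaintext = b"meet at noon"
ciphertext = xor_cipher(plaintext, b"secret")
```

Running `xor_cipher(ciphertext, b"secret")` returns the original plaintext, while the ciphertext itself is unreadable without the key.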


Fault tolerance refers to the ability of a computer system or component to continue working without loss of service in the event of an unexpected error or problem. Fault tolerance can be achieved with hardware, software, or a combination of both.
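In software, one simple fault-tolerance pattern is failover across replicas: try each one in turn and return the first successful answer. The replica functions below are stand-ins for real service endpoints:

```python
# Failover sketch: keep trying replicas until one succeeds.
def call_with_failover(replicas, request):
    last_error = None
    for replica in replicas:
        try:
            return replica(request)
        except Exception as err:  # a real system would catch narrower errors
            last_error = err
    raise RuntimeError("all replicas failed") from last_error

def broken(_request):
    raise ConnectionError("replica down")

def healthy(request):
    return f"ok: {request}"
```

With `[broken, healthy]`, the caller still gets a response: the failure of the first replica is absorbed and the second one serves the request.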


A federated database is a system in which multiple databases seemingly function as one entity; however, each component database exists independently of the others and is completely functional and self-sustained. When the federated database receives an application query, the system figures out which of its component databases contains the data being requested and passes the request to it. A federated database is a viable solution to database search issues.


A file server is the computer exclusively responsible for the central storage and management of files generated or required by other computers in a client/server model. In an enterprise setting, a file server enables end users to share information over the network without having to physically transfer files using external storage such as flash disks.


Cloud Firewalls are software-based, cloud-deployed network devices, built to stop or mitigate unwanted access to private networks. As a new technology, they are designed for modern business needs and sit within online application environments.


High availability is a quality of computing infrastructure that is important for mission-critical systems. High availability permits the computing infrastructure to continue functioning, even when certain components fail.


A hybrid cloud is a computing environment that combines a public cloud and a private cloud by allowing data and applications to be shared between them. When computing and processing demand fluctuates, hybrid cloud computing gives businesses the ability to seamlessly scale their on-premises infrastructure up to the public cloud to handle any overflow—without giving third-party datacenters access to the entirety of their data.


A hypervisor (or virtual machine monitor, VMM, virtualizer) is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine.


Infrastructure as a Service is a fundamental cloud service alongside Software as a Service and Platform as a Service; it encompasses the provision of virtualized computing resources that are remotely accessed through the internet. The resources are deployed and managed by cloud service providers.


The Internet of Things (IoT) refers to an ever-growing network of physical objects provided with unique identifiers (such as IP addresses) and the ability to transfer data over a network without human intervention. IoT extends beyond computers, smartphones, and tablets to a diverse range of devices that use technology to interact and communicate with their environment in an intelligent fashion, via the internet.


An integrated development environment (IDE) is a software application that provides comprehensive facilities to computer programmers for software development. An IDE normally consists of at least a source code editor, build automation tools, and a debugger.


An image is a complete backup of your server, including all volumes. Images are primarily meant for boot disk creation and are optimized for multiple downloads of the same data over and over. If the same image is downloaded many times, the downloads after the first are very fast (even for large images).


Input/output operations per second (IOPS, pronounced "eye-ops") is an input/output performance measurement used to characterize computer storage devices like hard disk drives (HDDs), solid-state drives (SSDs), and storage area networks (SANs).
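At its simplest, IOPS is operations completed divided by elapsed time. A small helper, with made-up numbers for the usage example:

```python
def iops(operations: int, seconds: float) -> float:
    """I/O operations completed per second over a measurement window."""
    return operations / seconds
```

For instance, a device that completes 4,000 reads in 2 seconds sustains 2,000 IOPS; real benchmarks also report the block size and read/write mix alongside this number.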


Load balancing is a networking solution for distributing computing workloads across multiple resources, such as servers. By distributing the workload, a load balancer ensures that no single server becomes a point of failure. When one server goes offline, the load balancer simply redirects incoming traffic to the other available servers. Load balancing can be implemented with software, hardware, or a combination of both.
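A minimal round-robin balancer makes the idea concrete: requests rotate through the pool, and an offline server can be dropped so traffic flows to the rest. The server names are placeholders:

```python
# Round-robin load balancing sketch: rotate through the server pool.
class RoundRobinBalancer:
    def __init__(self, servers):
        self.servers = list(servers)

    def next_server(self):
        """Pick the server at the front of the pool and rotate it to the back."""
        server = self.servers.pop(0)
        self.servers.append(server)
        return server

    def remove(self, server):
        """Drop a server that has gone offline from the rotation."""
        self.servers.remove(server)
```

Production balancers add health checks, weighting, and session affinity on top of this basic rotation, but the even spreading of requests is the core mechanism.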


Cloud service latency is the delay between a client request and a cloud service provider's response. Latency greatly affects how usable and enjoyable devices and communications are. Those problems can be magnified for cloud service communications, which can be especially prone to latency for several reasons.


Multitenancy refers to a software architecture in which a single instance of the software runs on a server and serves multiple tenants. Systems designed in such a manner are often called shared (in contrast to dedicated or isolated). A tenant is a group of users who share common access with specific privileges to the software instance. With a multitenant architecture, a software application is designed to provide every tenant a dedicated share of the instance - including its data, configuration, user management, tenant-specific functionality, and non-functional properties.
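The data-isolation side of multitenancy can be sketched as a single shared data store in which every query is scoped to the calling tenant's ID, so tenants never see each other's rows. The tenant names and records below are made up:

```python
# Shared store: one application instance holds all tenants' rows,
# each tagged with the owning tenant's ID.
records = [
    {"tenant": "acme", "invoice": 1},
    {"tenant": "globex", "invoice": 2},
    {"tenant": "acme", "invoice": 3},
]

def invoices_for(tenant_id):
    """Return only the invoices belonging to the given tenant."""
    return [r["invoice"] for r in records if r["tenant"] == tenant_id]
```

In a real multitenant database this scoping is typically enforced centrally (for example via row-level security) rather than repeated in every query.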


On-demand computing is a business computing model in which computing resources are made available to the user on an “as needed” basis. Rather than provisioning everything up front, on-demand computing allows cloud hosting companies to give their clients access to computing resources as they become necessary.


Cloud routes define the paths that network traffic takes from a virtual machine (VM) instance to other destinations. These destinations can be inside your Google Cloud Virtual Private Cloud (VPC) network (for example, in another VM) or outside it.


Data redundancy is a condition created within a database or data storage technology in which the same piece of data is held in two separate places. This can mean two different fields within a single database or two different spots in multiple software environments or platforms. Whenever data is repeated, it constitutes data redundancy.


Scalability refers to a system’s ability to maintain full functionality despite a change in size or volume. Unlike elasticity, which meets short-term, tactical needs, cloud scalability supports long-term, strategic needs. A scalable application should be able to function efficiently when expanded or moved to a larger operating environment. Cloud scalability enables businesses to meet expected demand for services without the need for large up-front infrastructure investments.


A service-level agreement (SLA) is a commitment between a service provider and a client. Particular aspects of the service – quality, availability, responsibilities – are agreed upon between the service provider and the service user.

SNAPSHOTS

Snapshots are incremental backups, which means that only the blocks on the device that have changed since your most recent snapshot are saved. Snapshots primarily target backup and disaster recovery scenarios; they are cheaper and easier to create (they can often be taken without stopping the VM), and they are meant for frequent, regular uploads and rare downloads.
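The "only changed blocks" idea can be sketched by diffing two versions of a disk, block by block; the disk contents here are toy strings standing in for fixed-size blocks:

```python
# Incremental snapshot sketch: store only blocks that differ from
# the previous snapshot (including any newly added blocks).
def incremental_snapshot(previous, current):
    """Return {block_index: data} for blocks changed since `previous`."""
    return {
        i: block
        for i, block in enumerate(current)
        if i >= len(previous) or previous[i] != block
    }

disk_v1 = ["aaaa", "bbbb", "cccc"]
disk_v2 = ["aaaa", "BBBB", "cccc", "dddd"]
```

Diffing `disk_v1` against `disk_v2` stores just two blocks (the modified block 1 and the new block 3) instead of the whole device, which is why successive snapshots stay cheap.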


Throughput is the measure of how fast a given application can transfer data. While throughput is certainly based in part on, and limited by, bandwidth, the two rates aren't the same, even though both are typically measured using bits per second.
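Since throughput is usually quoted in bits per second while file sizes are in bytes, the conversion is worth making explicit; a tiny helper with a made-up transfer:

```python
def throughput_bps(bytes_transferred: int, seconds: float) -> float:
    """Achieved transfer rate in bits per second (8 bits per byte)."""
    return bytes_transferred * 8 / seconds
```

Moving 1 MB (1,000,000 bytes) in 8 seconds works out to 1,000,000 bits per second of achieved throughput, regardless of how much bandwidth the link nominally offers.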


A virtual private cloud (VPC) is a secure, isolated private cloud hosted within a public cloud. VPC customers can run code, store data, host websites, and do anything else they could do in an ordinary private cloud, but the private cloud is hosted remotely by a public cloud provider. (Not all private clouds are hosted in this fashion.) A VPC combines the scalability and convenience of public cloud computing with the data isolation of private cloud computing.


Virtualization is the creation of a virtual version of a device or resource, such as an operating system, network, or storage device. Virtualization is making headway in three major areas of IT: server virtualization, storage virtualization, and network virtualization. The core advantage of virtualization is that it enables the management of workloads by significantly transforming traditional computing to make it more scalable. Virtualization allows CSPs to provide server resources as a utility rather than a single product.


A virtual machine (VM) is an image file, managed by the hypervisor, that exhibits the behavior of a separate computer, capable of running applications and programs on its own. In other words, a VM is a software environment that performs most functions of a physical computer, behaving like a separate computer system.

A virtual machine, usually known as a guest, is created within another computing environment referred to as a "host." Multiple virtual machines can exist within a single host at one time.