A virtual private cloud (VPC) is a secure, isolated private cloud hosted within a public cloud. VPC customers can run code, store data, host websites, and do anything else they could do in an ordinary private cloud, but the private cloud is hosted remotely by a public cloud provider. (Not all private clouds are hosted in this fashion.) VPCs combine the scalability and convenience of public cloud computing with the data isolation of private cloud computing.
A subnet is a range of IP addresses within a network that are reserved so that they're not available to everyone within the network, essentially dividing part of the network for private use. In a VPC these are private IP addresses that are not accessible via the public Internet, unlike typical IP addresses, which are publicly visible.
Load balancing distributes server loads across multiple resources — most often across multiple servers. The technique aims to reduce response time, increase throughput, and in general speed things up for each end user.
Modern high‑traffic websites must serve hundreds of thousands, if not millions, of concurrent requests from users or clients and return the correct text, images, video, or application data, all in a fast and reliable manner. To cost‑effectively scale to meet these high volumes, modern computing best practice generally requires adding more servers.
A load balancer acts as the “traffic cop” sitting in front of your servers and routing client requests across all servers capable of fulfilling those requests in a manner that maximizes speed and capacity utilization and ensures that no one server is overworked, which could degrade performance. If a single server goes down, the load balancer redirects traffic to the remaining online servers. When a new server is added to the server group, the load balancer automatically starts to send requests to it.
Loads are distributed based on a set of predefined metrics, such as geographical location or the number of concurrent site visitors.
Members of one group — for example, people living in Europe — may be directed to a server within Europe, while members of another group, for instance North Americans, may be directed to another server closer to them.
In this manner, a load balancer performs the following functions:
Distributes client requests or network load efficiently across multiple servers
Ensures high availability and reliability by sending requests only to servers that are online
Provides the flexibility to add or subtract servers as demand dictates
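To make the idea concrete, here is a minimal round-robin sketch in Python (the server addresses are hypothetical; production load balancers also track health and current load):
from itertools import cycle

servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]  # hypothetical backend pool
pool = cycle(servers)

def route_request(request_id):
    # Hand each incoming request to the next server in the pool,
    # so no single server is overworked.
    backend = next(pool)
    print(f"request {request_id} -> {backend}")
    return backend

for i in range(6):
    route_request(i)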
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource. An Ingress may be configured to give Services externally-reachable URLs, load balance traffic, terminate SSL / TLS, and offer name-based virtual hosting.
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer, though it may also configure your edge router or additional frontends to help handle the traffic.
AWS WAF is a web application firewall that helps protect your web applications or APIs against common web exploits and bots that may affect availability, compromise security, or consume excessive resources.
SSL termination is a process by which SSL-encrypted data traffic is decrypted (or offloaded). Servers with a secure socket layer (SSL) connection can simultaneously handle many connections or sessions. An SSL connection sends encrypted data between an end-user’s computer and web server by using a certificate for authentication. SSL termination helps speed the decryption process and reduces the processing burden on backend servers.
DNS is the technology that translates human-readable, text-based domain names into machine-friendly, numerical IP addresses. When users type domain names into the URL bar of their browser, DNS servers are responsible for translating those domain names to numeric IP addresses, leading them to the correct website.
Any Internet-connected computer can be reached through a public IP address, either an IPv4 address (e.g. 173.194.121.32) or an IPv6 address (e.g. 2027:0da8:8b73:0000:0000:8a2e:0370:1337).
Computers can handle such addresses easily, but people have a hard time finding out who’s running the server or what service the website offers. IP addresses are hard to remember and might change over time.
In the following image, we see the basic operation of a DNS server.
First, the computer looks in its local DNS cache, which stores information that the computer has recently retrieved.
If the answer is not in the local cache, the computer performs an external DNS query against a recursive DNS server, which keeps a cache of its own.
If the recursive DNS server doesn't know the answer either, it queries the root nameservers, which look at the last part of the domain (the .com in example.com) and direct the query to the TLD nameservers.
The .com top-level domain (TLD) nameservers handle the traffic for all sites ending in .com. From here, they identify which nameservers example.com is delegated to; if the TLD nameservers don't have the information we need, they direct the query to an authoritative nameserver, which knows all the information about the specific domain, stored in DNS records, and retrieves the relevant record — here, the “A record”.
Finally, the authoritative servers for example.com respond with the appropriate IP address for www.example.com.
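As a quick illustration, here is a minimal Python sketch (standard library only) that asks the operating system's resolver — which follows the cache, recursive, root, TLD, and authoritative chain described above — for a hostname's addresses:
import socket

# Resolve example.com the way a browser would; the OS resolver consults its
# cache and a recursive DNS server, which walks the hierarchy as needed.
for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
    "example.com", 443, proto=socket.IPPROTO_TCP
):
    print(family.name, "->", sockaddr[0])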
The following are the most common DNS server types that are used to resolve hostnames into IP addresses.
A DNS resolver (recursive resolver), is designed to receive DNS queries, which include a human-readable hostname such as “www.example.com”, and is responsible for tracking the IP address for that hostname.
The root server is the first step in the journey from hostname to IP address. The DNS root server extracts the top-level domain (TLD) from the user's query — for www.example.com, it provides details of the .com TLD Name Server. In turn, that server will provide details for domains within the .com DNS zone, including example.com.
There are 13 logical root nameservers, indicated by the letters A through M, operated by organisations like the Internet Systems Consortium, Verisign, ICANN, the University of Maryland, and the U.S. Army Research Lab.
Higher-level servers in the DNS hierarchy define which DNS server is the “authoritative” name server for a specific hostname, meaning that it holds the up-to-date information for that hostname.
The Authoritative Name Server is the last stop in the name server query—it takes the hostname and returns the correct IP address to the DNS Resolver (or if it cannot find the domain, returns the message NXDOMAIN).
DNS servers create a DNS record to provide important information about a domain or hostname, particularly its current IP address. The most common DNS record types are:
Address Mapping record (A Record)—also known as a DNS host record, stores a hostname and its corresponding IPv4 address.
IP Version 6 Address record (AAAA Record)—stores a hostname and its corresponding IPv6 address.
Canonical Name record (CNAME Record)—can be used to alias a hostname to another hostname. When a DNS client requests a record that contains a CNAME, which points to another hostname, the DNS resolution process is repeated with the new hostname.
Mail exchanger record (MX Record)—specifies an SMTP email server for the domain, used to route incoming email to the domain's mail server.
Name Server records (NS Record)—specifies that a DNS Zone, such as “example.com” is delegated to a specific Authoritative Name Server, and provides the address of the name server.
Reverse-lookup Pointer records (PTR Record)—allows a DNS resolver to provide an IP address and receive a hostname (reverse DNS lookup).
Certificate record (CERT Record)—stores encryption certificates—PKIX, SPKI, PGP, and so on.
Service Location (SRV Record)—a service location record, like MX but for other communication protocols.
Text Record (TXT Record)—typically carries machine-readable data, such as Sender Policy Framework (SPF), DKIM, and DMARC policies, or opportunistic-encryption parameters.
Start of Authority (SOA Record)—this record appears at the beginning of a DNS zone file, and indicates the Authoritative Name Server for the current DNS zone, contact details for the domain administrator, domain serial number, and information on how frequently DNS information for this zone should be refreshed.
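To make these record types concrete, here is a hedged sketch of a small BIND-style zone file for a hypothetical example.com zone (all values are illustrative, using documentation IP ranges):
$TTL 3600
example.com.  IN  SOA   ns1.example.com. admin.example.com. (
                        2024010101  ; serial
                        7200        ; refresh
                        3600        ; retry
                        1209600     ; expire
                        3600 )      ; minimum TTL
example.com.  IN  NS    ns1.example.com.
example.com.  IN  A     203.0.113.10
example.com.  IN  AAAA  2001:db8::10
www           IN  CNAME example.com.
example.com.  IN  MX    10 mail.example.com.
example.com.  IN  TXT   "v=spf1 mx -all"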
Now that we’ve covered the major types of traditional DNS infrastructure, you should know that DNS can be more than just the “plumbing” of the Internet. Advanced DNS solutions can help do some amazing things, including:
Global server load balancing (GSLB): fast routing of connections between globally distributed data centres
Multi-CDN: routing users to the CDN that will provide the best experience
Geo-routing: identifying the physical location of each user and ensuring they are routed to the nearest possible resource
Data centre and cloud migration: moving traffic in a controlled manner from on-premises resources to cloud resources
Internet traffic management: reducing network congestion and ensuring traffic flows to the appropriate resource in an optimal manner
These capabilities are made possible by next-generation DNS servers that are able to intelligently route and filter traffic.
A NAT gateway is a Network Address Translation (NAT) service. You can use a NAT gateway so that instances in a private subnet can connect to services outside your VPC but external services cannot initiate a connection with those instances.
When you create a NAT gateway, you specify one of the following connectivity types:
Public – (Default) Instances in private subnets can connect to the internet through a public NAT gateway, but cannot receive unsolicited inbound connections from the internet. You create a public NAT gateway in a public subnet and must associate an elastic IP address with the NAT gateway at creation. You route traffic from the NAT gateway to the internet gateway for the VPC. Alternatively, you can use a public NAT gateway to connect to other VPCs or your on-premises network. In this case, you route traffic from the NAT gateway through a transit gateway or a virtual private gateway.
Private – Instances in private subnets can connect to other VPCs or your on-premises network through a private NAT gateway. You can route traffic from the NAT gateway through a transit gateway or a virtual private gateway. You cannot associate an elastic IP address with a private NAT gateway. You can attach an internet gateway to a VPC with a private NAT gateway, but if you route traffic from the private NAT gateway to the internet gateway, the internet gateway drops the traffic.
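For instance, a hedged AWS CLI sketch of creating a public NAT gateway and routing a private subnet through it (all resource IDs are placeholders):
aws ec2 allocate-address --domain vpc                  # the elastic IP for the gateway
aws ec2 create-nat-gateway \
  --subnet-id subnet-0abc1234 \
  --allocation-id eipalloc-0abc1234 \
  --connectivity-type public
aws ec2 create-route \
  --route-table-id rtb-0abc1234 \
  --destination-cidr-block 0.0.0.0/0 \
  --nat-gateway-id nat-0abc1234                        # private subnet's default route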
Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.
Amazon EKS managed node groups create and manage Amazon EC2 instances for you.
Every managed node is provisioned as part of an Amazon EC2 Auto Scaling group that's managed for you by Amazon EKS. Moreover, every resource, including the Amazon EC2 instances and Auto Scaling groups, runs within your AWS account.
The Auto Scaling group of a managed node group spans every subnet that you specify when you create the group.
Amazon EKS tags managed node group resources so that they are configured to use the Kubernetes Cluster Autoscaler.
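As an illustration, a managed node group can be created with eksctl (a hedged sketch; the cluster and group names are placeholders):
eksctl create nodegroup \
  --cluster my-cluster \
  --name managed-ng-1 \
  --node-type t3.medium \
  --nodes 3 \
  --nodes-min 2 \
  --nodes-max 5 \
  --managed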
Amazon Relational Database Service (RDS) is a managed SQL database service provided by Amazon Web Services (AWS). Amazon RDS supports an array of database engines to store and organize data. It also helps with relational database management tasks, such as data migration, backup, recovery and patching.
Amazon RDS facilitates the deployment and maintenance of relational databases in the cloud. A cloud administrator uses Amazon RDS to set up, operate, manage and scale a relational instance of a cloud database. Amazon RDS is not itself a database; it is a service used to manage relational databases.
Amazon provides several instance types with different combinations of resources, such as CPU, memory, storage options and networking capacity. Each type comes in a variety of sizes to suit the needs of different workloads.
Amazon Simple Storage Service (S3) is a massively scalable storage service based on object storage technology. It provides a very high level of durability, with high availability and high performance. Data can be accessed from anywhere via the Internet, through the Amazon Console and the powerful S3 API.
S3 storage provides the following key features:
Buckets—data is stored in buckets. Each bucket can store an unlimited amount of unstructured data.
Elastic scalability—S3 has no storage limit. Individual objects can be up to 5TB in size.
Flexible data structure—each object is identified using a unique key, and you can use metadata to flexibly organize data.
Downloading data—easily share data with anyone inside or outside your organization and enable them to download data over the Internet.
Permissions—assign permissions at the bucket or object level to ensure only authorized users can access data.
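For example, a few common operations with the AWS CLI (bucket and object names are hypothetical):
aws s3 mb s3://my-example-bucket                                   # create a bucket
aws s3 cp report.pdf s3://my-example-bucket/reports/report.pdf     # upload an object
aws s3 presign s3://my-example-bucket/reports/report.pdf --expires-in 3600   # time-limited download link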
AWS Elastic Block Store (EBS) is Amazon’s block-level storage solution, used with the EC2 cloud service to store persistent data. This means that the data is kept on the AWS EBS servers even when the EC2 instances are shut down. EBS offers high availability and low-latency performance within the selected availability zone, allowing users to scale storage capacity under a low, subscription-based pricing model. Data volumes can be dynamically attached, detached, and scaled with any EC2 instance, just like a physical block storage drive. As a highly dependable cloud service, EBS is designed for 99.999% availability.
An internet gateway is a service that allows internet traffic to enter a VPC. Otherwise, a VPC is completely segmented off, and the only way to reach it is through a VPN connection rather than an internet connection.
An Internet Gateway is a logical connection between an AWS VPC and the Internet. It is not a physical device. Each VPC has only one Internet Gateway. If a VPC doesn’t have an Internet Gateway, then the resources cannot be accessed from the Internet. Conversely, resources within your VPC need an Internet Gateway to access the Internet.
Amazon Elastic Kubernetes Service (Amazon EKS) is a fully-managed, certified Kubernetes conformant service that simplifies the process of building, securing, operating, and maintaining Kubernetes clusters on AWS. Amazon EKS integrates with core AWS services such as CloudWatch, Auto Scaling Groups, and IAM to provide a seamless experience for monitoring, scaling and load balancing your containerized applications.
A beginner’s guide to OpenTelemetry
OpenTelemetry is a collection of tools, APIs, and SDKs.
Instrument the application using the OpenTelemetry SDK.
Traces are sent to a (collector) agent.
The collector exposes ports 4317 (gRPC) and/or 4318 (HTTP) for OTLP.
Traces are then exported (in this case to Jaeger) and stored on the backend.
The Jaeger Query service retrieves the traces and exposes them to the Jaeger UI.
Jaeger UI provides a web based user interface that can be used to analyse traces.
OpenTelemetry gives us the tools to create trace data. It provides a vendor agnostic standard for observability as it aims to standardise the generation of traces.
This is good, because that means that we are not tied to any tool (or vendor). Not only can we use any programming language we want, but we can also pick and choose the storage backend, thus avoiding a potential buy in from commercial vendors.
It also means that developers can instrument their application without having to know where the data will be stored.
API: defines how OpenTelemetry is used.
SDK: defines the specific implementation of the API for a given language.
As you can see from the image (at the top), in order to get trace data, we first need to instrument the application. To collect the trace data, we can use the OpenTelemetry SDK.
The trace data can be generated using either automatic or manual (or a mix) instrumentation.
To instrument your application with OpenTelemetry, go to the OpenTelemetry repository, and pick the language for your application and follow the instructions.
One of the best ways to instrument applications is to use OpenTelemetry automatic instrumentation (auto-instrumentation). This approach is simple, easy, and doesn’t require many code changes.
Using auto-instrumentation libraries means that you don’t need to write code for the trace information to be collected. In fact, OpenTelemetry offers an API and SDK that allow for easy bootstrapping of distributed tracing into your software.
This is good to know if you don’t have the necessary knowledge (or time) to create a tracing framework tailored to your application.
On OpenTelemetry Registry you can search for libraries, plugins, integrations, and other useful tools for extending OpenTelemetry.
For example, running the following command will automatically instrument your python code.
opentelemetry-instrument python app.py
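If the tooling is not yet installed, the setup looks roughly like this (a hedged sketch; package and command names are from the OpenTelemetry Python documentation):
pip install opentelemetry-distro opentelemetry-exporter-otlp
opentelemetry-bootstrap -a install   # detects installed libraries and adds matching instrumentations
opentelemetry-instrument python app.py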
When you use auto-instrumentation, a predefined set of spans will be created for you and populated with relevant attributes.
Manual instrumentation is when you write specific code for your application. It’s the process of adding observability code to your application. This can more effectively suit your needs. For example, you can add attributes and events.
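For example, a minimal manual-instrumentation sketch in Python (the tracer name, attribute, and event are illustrative; the calls come from the opentelemetry-api and opentelemetry-sdk packages):
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer that prints finished spans to the console.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("demo.manual")

def handle_order(order_id):
    # Each request is wrapped in a span enriched with custom attributes/events.
    with tracer.start_as_current_span("handle-order") as span:
        span.set_attribute("app.order_id", order_id)
        span.add_event("order validated")

handle_order("A-1234")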
Once you have collected trace data, you need to send it somewhere.
The OpenTelemetry Protocol (OTLP) specification describes the encoding, transport, and delivery mechanism of telemetry data between telemetry sources, intermediate nodes such as collectors, and telemetry backends.
The OTLP protocol describes how to encode and transmit telemetry data, which makes it a natural choice for data transport. Each language SDK provides an OTLP exporter you can configure to export data over OTLP. The OpenTelemetry SDK then transforms events into OTLP data.
The data from your instrumented application can be sent to an OpenTelemetry collector.
The collector is a component of OpenTelemetry that receives telemetry data (spans, metrics, logs, etc.), processes (pre-processes) it, and exports it (sends it off to a backend of your choice).
The OpenTelemetry collector can receive telemetry data in multiple formats.
The collector can be setup as an agent or as a gateway.
We usually first send traces to a (collector) agent. This (collector) agent handles the trace data from the instrumented application.
The (collector) agent can offload responsibilities that the client instrumentation would otherwise need to handle. This includes batching, retries, encryption, compression, and more.
You can also perform sampling here, depending on the volume of traces/traffic you want to send (e.g., keep only 10% of the traces).
You need to configure receivers (how data gets into the collector), which then transform the data (process) before sending it to one or more backends using exporters.
Here we configure a receiver (on the collector) that accepts OTLP data on port 4317 (gRPC). (You also need to configure your application to export over OTLP to use it)
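For illustration, a hedged sketch of such a collector configuration (the jaeger:4317 exporter endpoint is an assumed backend address):
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317   # how data gets into the collector
processors:
  batch: {}                      # batch spans before exporting
exporters:
  otlp:
    endpoint: jaeger:4317        # assumed Jaeger OTLP endpoint
    tls:
      insecure: true
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp]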
In our OpenTelemetry on Kubernetes article, we will talk about this in more detail and also show how you can deploy this on a local Kubernetes cluster.
Once you’ve instrumented your code, you need to get the data out in order to do anything useful with it. OpenTelemetry comes with a variety of exporters.
An exporter converts OpenTelemetry Protocol (OTLP) formatted data into the predefined format of a given back-end and exports that data to be interpreted by the back-end or system.
For example, for metrics, there is a Prometheus exporter that allows sending metrics so that Prometheus can consume it.
A common scenario (especially during testing), is to export all data directly to Jaeger using the jaeger-exporter.
You can read more about exporters in the OpenTelemetry documentation.
It’s important to know that the OpenTelemetry collector does not provide its own storage backend.
The storage backend can be Jaeger, Zipkin, Elastic, Cassandra, Tempo, Splunk, a vendor (Honeycomb, Lightstep, Signoz, Logz.io, Aspecto etc..).
For a full list of storage alternatives, please check out the Awesome OpenTelemetry repository.
This OpenTelemetry repo provides a complete demo on how you can deploy OpenTelemetry on Kubernetes.
Please check this article on how to deploy this on a local Kubernetes cluster.
Check out Awesome-OpenTelemetry to quickly get started with OpenTelemetry. This repo contains a big list of helpful resources.
How to get free and automatic SSL certificates using cert-manager and Let’s Encrypt
In today's world, SSL certificates are an essential part of deploying an application to the Internet. They are among the most important attributes that determine whether your website is considered safe or not.
This padlock symbol conveys to your customer that the website they are visiting is safe, secured, and verified. So how do you actually achieve HTTPS on your website?
HTTP + SSL = HTTPS.
The HTTP protocol developed in the early 1990s has become an integral part of our daily life. Today, we cannot live a single day without it. However, it does not even provide a basic level of security when exchanging information between the user and the web server. That is when HTTPS (“S” means “secure” here) comes to the rescue. In HTTPS, the exchanged data is encrypted using SSL/TLS — that family of protocols has proven itself well in protecting privacy and data integrity and is being actively promoted by the industry.
Before being able to request certificates, you must create CA resources: Issuer or ClusterIssuer. They are used for signing CSRs (certificate signing requests). The difference between these resources is that ClusterIssuer is non-namespaced and can be used in multiple namespaces:
Issuer is used within a single namespace only;
ClusterIssuer is a global cluster object.
Let’s start with the simplest case — requesting a self-signed certificate. It is quite common. For example, you can use it in your K8s cluster for testing environments dynamically created for developers needs. It can be also useful in the case of the external load balancer that terminates SSL traffic.
The Issuer resource would look like this:
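The original manifest is not preserved in this copy, so here is a hedged reconstruction using the cert-manager.io/v1 API:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: selfsigned
spec:
  selfSigned: {}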
To issue a certificate, you have to define the Certificate resource that determines the issuer (see the issuerRef section below) and the location of the private key (the secretName field). Then you need to reference that key in the Ingress (note the tls section in the spec field):
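A hedged reconstruction of the pair (the secretName field is what ties the Certificate to the Ingress tls section; names and hosts are illustrative):
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: self-signed-crt
spec:
  secretName: self-signed-tls   # where the issued key pair will be stored
  commonName: example.com       # hypothetical domain
  dnsNames:
  - example.com
  issuerRef:
    name: selfsigned
    kind: Issuer
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
spec:
  tls:
  - hosts:
    - example.com
    secretName: self-signed-tls  # the secret issued above
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc    # hypothetical backend service
            port:
              number: 80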
The certificate will be issued in a few seconds after these resources are added to the cluster. You can see the confirmation in the output of the command:
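A likely form of that command (a hedged sketch; the resource name follows the example above — look for a Ready condition in the Status section of the output):
kubectl describe certificate self-signed-crt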
Looking at the secret resource itself, you will see:
the tls.key private key,
the ca.crt root certificate,
and the self-signed tls.crt certificate.
You can browse the contents of these files using the openssl utility:
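For example (a hedged sketch; the secret name follows the example above):
kubectl get secret self-signed-tls -o jsonpath='{.data.tls\.crt}' | base64 -d > tls.crt
openssl x509 -in tls.crt -text -noout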
Clients will not trust a certificate issued this way, since the self-signed issuer has no real CA behind it. To avoid this, specify the path to the secret file containing ca.crt in the Certificate resource. For example, you can use the corporate CA. This way, you will be able to sign certificates issued for your Ingress with a key that is already in use by other server services/information systems.
So, let’s describe the resources:
Note that we use the staging server in the acme section’s server field for our Issuer. You can replace it with the production one later.
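A hedged reconstruction of those resources using the cert-manager.io/v1 API (the email and domain are placeholders; the Certificate name le-tls matches the command referenced below):
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # Staging endpoint; swap in https://acme-v02.api.letsencrypt.org/directory for production.
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-staging-account-key
    solvers:
    - http01:
        ingress:
          class: nginx   # assumes an nginx ingress controller
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: le-tls
spec:
  secretName: le-tls
  dnsNames:
  - example.com
  issuerRef:
    name: letsencrypt-staging
    kind: Issuer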
Let’s apply this configuration and trace the entire process of obtaining the certificate:
1. The creation of a Certificate leads to the emergence of a new CertificateRequest resource:
2. In its description, there is a notification about the creation of an Order:
3. The Order contains the description of the validation parameters and its current status. The validation is performed by the Challenge resource:
4. And finally, there is information about the status of the validation itself in resource details:
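All of these objects can be inspected with kubectl as the issuance progresses (a hedged sketch; the resource types are registered by cert-manager’s CRDs):
kubectl get certificaterequest
kubectl describe order
kubectl describe challenge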
The certificate will be issued in less than a minute if all the prerequisites are met: the domain is accessible from the outer world, the rate limits on the LE side are respected, and so on. If the issuance was successful, you should see the message “Certificate issued successfully” in the output of the describe certificate le-tls command.
Now you can safely change the ACME server address to the production one (https://acme-v02.api.letsencrypt.org/directory) and re-issue valid certificates signed by Let's Encrypt Authority X3 instead of Fake LE Intermediate X1.
But first, you have to delete the Certificate resource. Otherwise, the issuance procedure will not start because the certificate already exists and is valid. Deleting the secret would immediately result in invalidation of the certificate, with a corresponding message in the output of the describe certificate command:
Now it is time to apply the production manifest for the Issuer with the Certificate described above (it has not changed):
After receiving the “Certificate issued successfully” confirmation, let us check it out:
To make one step further we will issue a certificate for all subdomains of the site using another method of validation — the DNS one. We will use CloudFlare as our DNS provider to change the domain records we need.
First, let’s create a token to use the CloudFlare API:
1. Profile → API Tokens → Create Token.
2. Set access rights as follows:
Permissions:
→ Zone — DNS — Edit
→ Zone — Zone — Read
Zone Resources:
→ Include — All Zones
3. Copy the generated token (for example, y_JNkgQwkroIsflbbYqYmBooyspN6BskXZpsiH4M).
Create a Secret resource containing the token and reference it in your Issuer:
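A hedged sketch of those two resources (the token value is the example one from above; names are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: cloudflare-api-token
type: Opaque
stringData:
  api-token: y_JNkgQwkroIsflbbYqYmBooyspN6BskXZpsiH4M
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-dns
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-dns-account-key
    solvers:
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token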
(Do not forget to use a staging environment for testing!)
It is time to go through the domain ownership confirmation procedure:
The TXT record will appear at your DNS dashboard:
… and after a while, the status will change to:
Let’s make sure that the certificate is valid for all subdomains:
The validation over DNS is usually slow since most DNS providers have a so-called “propagation time” — a period showing how long it takes for an updated DNS record to become available on all DNS servers of the provider.
The ACME standard also supports a combination of both types of validation. You can use it to speed up obtaining a certificate for the main domain. In this case, the description of the Issuer will have the following form:
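A hedged sketch of such an Issuer, using solver selectors to validate the wildcard names over DNS-01 and the main domain over HTTP-01:
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-combined
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: admin@example.com
    privateKeySecretRef:
      name: letsencrypt-combined-account-key
    solvers:
    # DNS-01 for the wildcard subdomains...
    - dns01:
        cloudflare:
          apiTokenSecretRef:
            name: cloudflare-api-token
            key: api-token
      selector:
        dnsNames:
        - '*.example.com'
    # ...and HTTP-01 for the main domain, which is usually faster.
    - http01:
        ingress:
          class: nginx
      selector:
        dnsNames:
        - example.com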
If you apply this configuration, two Challenge resources will be created:
In this case, the availability of the Issuer is enough to get the work done, meaning that we have to create fewer entities.
We have learned how to obtain auto-renewable, self-signed, and free SSL certificates from Let’s Encrypt for website domains that are managed by Ingresses in the Kubernetes clusters.
Getting an SSL certificate is not that easy; moreover, it is expensive. In today's world, Kubernetes is predominantly everywhere, and with tonnes of Ingress resources in Kubernetes it becomes really hard to obtain such a huge number of certificates, monitor them, and rotate them every time they expire. This would be a nightmare for DevOps engineers. What if I told you that there is a tool that could get you free SSL certificates and rotate them automatically when they expire? Here comes cert-manager. Cert-manager was created by Jetstack, and a lot of the development is still sponsored by them. As per cert-manager’s official guide, cert-manager is a Kubernetes-native certificate management controller. It can help with issuing certificates from a variety of sources, such as Let’s Encrypt, HashiCorp Vault, Venafi, a simple signing key pair, or self-signed certificates. It will ensure certificates are valid and up to date, and attempt to renew certificates at a configured time before expiry.
For example, Google has been advocating “HTTPS everywhere” since 2014. When prioritizing search results, it takes into account whether sites use secure, encrypted connections. All this advocacy affects ordinary users as well: modern browsers warn their users about insecure connections and invalid SSL certificates.
A certificate for a personal website might cost tens of dollars. However, buying one is not always justified. Fortunately, since late 2015, there has been a free alternative in the form of Let’s Encrypt (LE) certificates. This nonprofit authority was created by Mozilla enthusiasts to make internet-wide, hassle-free encryption a reality.
The certificate authority issues certificates (the most basic ones available on the market) valid for 90 days, and it is also possible to obtain a so-called wildcard certificate for several subdomains.
The algorithms described in the Automatic Certificate Management Environment (ACME) protocol (designed specifically for Let’s Encrypt) are used to obtain a certificate. With it, the agent can prove control of the domain either by provisioning an HTTP resource (the so-called “HTTP-01 challenge”) or DNS records (“DNS-01 challenge”) — you may find more information about them below.
Cert-manager is a Kubernetes-native certificate management controller consisting of a set of CustomResourceDefinitions (hence the requirement on the minimum supported version of K8s, v1.12) for configuring CAs (certificate authorities) and obtaining certificates. The installation of CRDs in a cluster is straightforward and boils down to applying a single YAML file:
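A hedged sketch of that installation step (replace v1.x.x with a concrete release tag from the cert-manager releases page):
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.x.x/cert-manager.yaml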
(You can also install it via the official Helm chart.)
It is worth noting that clients consuming certificates issued in such a way (i.e., using the self-signed issuer) will not trust them. The reason is simple: this issuer type does not have a CA.
As I mentioned before, there are two types of challenges available to prove control of the domain: HTTP-01 and DNS-01.
The first approach (HTTP-01) is based on deploying a tiny web server as a separate deployment. It will be serving some information at the http://<YOUR_DOMAIN>/.well-known/acme-challenge/<TOKEN> URL per request of the certification server. Therefore, this method implies the accessibility of Ingress from the outer world via port 80 and the publicity of the domain’s DNS record.
The second challenge (DNS-01) is suitable if there is an API you can use to change the DNS records of your domain. The Issuer uses such tokens to create TXT records for your domain. Then, the ACME server checks these records during confirmation. Let’s Encrypt easily integrates with various DNS providers, including CloudFlare, AWS Route53, Google CloudDNS, and others (as well as with LE’s own DNS implementation, acme-dns).
Note: Let’s Encrypt imposes fairly strict rate limits on requests to ACME servers. To avoid unnecessary load on LE’s production environment, we recommend using the staging server for testing (the difference is in the ACME server address only).
Besides creating certificates directly, you can use cert-manager’s ingress-shim component. It relieves you of the need to create Certificate resources explicitly. The idea is to obtain a certificate automatically using the Issuer specified in the special annotations of the Ingress. Here is an example of the respective Ingress resource:
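A hedged sketch of such an Ingress (the annotation name comes from the cert-manager docs; host, service, and issuer names are placeholders):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example
  annotations:
    cert-manager.io/issuer: letsencrypt-staging   # which Issuer to obtain the certificate from
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls   # cert-manager will create and maintain this secret
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-svc
            port:
              number: 80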
Also, there is the kubernetes.io/tls-acme: "true" annotation. The peculiarity about it is that, when deploying cert-manager, the default Issuer must be specified via Helm arguments (or by appending arguments to cert-manager’s deployment container).
We at Flant do not use these approaches (mentioned in Method #4) and cannot recommend them due to their opacity (and various associated issues). However, it’s good to mention them in the article to provide a fuller picture.
The article provides example solutions for the most common problems we face. However, cert-manager’s features are not limited to those described above. In the project documentation, you can find examples of using it with other services — for example, using HashiCorp Vault as a certificate authority.