Managed PostgreSQL Databases on Akamai Cloud with Terraform
Author: Peter Sari
Other contributors: Nathan Melehan
This guide demonstrates how to use Terraform to set up a PostgreSQL cluster with the Managed Database service on Akamai Cloud. Terraform is an infrastructure as code (IaC) tool that allows you to automate the deployment of cloud infrastructure. PostgreSQL is a widely-adopted, open source database solution used by many DevOps engineers and supported by a large range of operating systems.
Akamai’s Managed Database service is a relational database management system (RDBMS) offered as a service. Akamai manages both the underlying compute instances and the RDBMS software, applies software updates, and maintains the health of these systems. Using Managed Databases, you can provision managed clusters of MySQL and PostgreSQL with a range of supported versions.
Managed Database clusters on Akamai Cloud also support multiple databases. This guide shows how to use Terraform to deploy individual databases on a cluster using two Terraform providers, where a modular configuration handles the database deployments.
Before You Begin
To follow this tutorial, perform these steps first:
Install Terraform on your workstation
Create a personal access token for the Linode API that has permission to create databases
Install a PostgreSQL client on your workstation. This is used to validate the installation of the database cluster.
The steps in this guide use the psql command in the example commands shown. Visit the PostgreSQL Downloads page for installation instructions.
Note: A list of other clients is available on wiki.postgresql.org, but the instructions in this guide are intended for psql.
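After installation, you can confirm the client is available by checking its version (the exact package name and install method depend on your operating system):
psql --version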
Terraform Project File Structure
The project in this guide follows the directory structure shown below:
.
├── main.tf
├── modules
│   └── databases
│       ├── main.tf
│       ├── outputs.tf
│       └── variables.tf
├── outputs.tf
├── providers.tf
├── terraform.tfvars
└── variables.tf
In this structure, the root Terraform files (main.tf, outputs.tf, terraform.tfvars, variables.tf) build the database cluster infrastructure. The modules/databases/ module handles database creation within the cluster.
Note: You can use psql to create tables within the databases that the module creates.
Database Cluster Options
There are several options that can be configured when provisioning a new Managed Database cluster, including:
The size of the cluster you would like to deploy
The instance types that underpin the databases
The database software and supported version
The maintenance window schedule. Managed Databases require periodic maintenance by Akamai, and the window for this can be configured.
The region where the cluster should be located
In a later section, each of these choices is encoded in your Terraform variables file. This file references the unique labels/strings that correspond to the region, instance type, and other options you decide on.
Regions
Decide on the region in which you would like to deploy and make a note of it. A list of regions that support Managed Databases is returned by this API call:
curl -s https://api.linode.com/v4/regions | jq '.data[] | select(.capabilities[] | contains("Managed Databases")) | .id'
The output resembles:
"ap-west"
"ca-central"
"ap-southeast"
"us-iad"
"us-ord"
"fr-par"
...
The ID is used as the value for the variable region in the Terraform configuration. See our Region Availability page for a full list of compute region IDs. Availability for Managed Database deployment may vary.
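If you already have a region in mind, you can also query that region directly to confirm that it lists the Managed Databases capability. For example, using us-ord as a stand-in for your chosen region:
curl -s https://api.linode.com/v4/regions/us-ord | jq '.capabilities'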
Instance Types
Akamai offers a range of different instance types, but a subset of these can be used with Managed Database Clusters. When deciding on an instance type, you must know what the CPU, memory and storage requirements are for your initial database deployments.
As a reference point, the storage capacity of a single instance represents the aggregate storage available to your databases on the cluster. A list of instance types compatible with Managed Databases is returned by this API call:
curl -s https://api.linode.com/v4/databases/types | jq '.data[] | { "id" , "label" , "disk" , "vcpus" }'
The output resembles:
{
"id": "g6-nanode-1",
"label": "DBaaS - Nanode 1GB",
"disk": 9216,
"vcpus": 1
}
{
"id": "g6-standard-1",
"label": "DBaaS - Linode 2GB",
"disk": 30720,
"vcpus": 1
}
{
"id": "g6-standard-2",
"label": "DBaaS - Linode 4GB",
"disk": 59392,
"vcpus": 2
}
...
This command outputs the id, which is later referenced in the db_instance_type variable in your Terraform configuration, along with information about disk space (in MB) and vCPUs. The amount of RAM per instance is indicated in its label.
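Since the id, label, and disk fields are available from this endpoint, you can also narrow the list to types that meet a minimum requirement. For example, this variation (a convenience query, not required by the guide) lists only types with at least 30 GB of disk:
curl -s https://api.linode.com/v4/databases/types | jq '.data[] | select(.disk >= 30720) | { "id", "label", "disk" }'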
Relational Database Management System (RDBMS) Software and Version
When creating the cluster, the server software (PostgreSQL or MySQL at the time of this writing) and version need to be specified. For this guide, PostgreSQL v17 is used. The list of currently supported RDBMS with associated versions is returned by this API call:
curl -s https://api.linode.com/v4/databases/engines | jq '.data[].id'
"mysql/8"
"postgresql/13"
"postgresql/14"
"postgresql/15"
"postgresql/16"
"postgresql/17"
The value postgresql/17 is later referenced by the rdbms_ver variable in the Terraform configuration.
Cluster Size
The size of the cluster (the number of database instances, or nodes, in the cluster) determines its read capacity and whether it remains available when a node fails. Clusters can be built as a single node, for smaller, less critical applications, or they can be provisioned with 2 or 3 nodes, for those that require high availability or higher read capacity.
Your cluster size can be changed at any time after the cluster is first provisioned. In the Terraform configuration demonstrated later, this is done by configuring the cluster_nodes variable.
Maintenance Windows
Because the cluster nodes and RDBMS software are managed and kept up to date by Akamai, it is important to provide a viable maintenance window. This window is specified in the time zone of the cluster’s region.
In the Terraform configuration, the update_hour and update_day variables control this window. Both variables are integer values:
update_day ranges from 1 (Sunday) to 7 (Saturday).
update_hour ranges from 0 (midnight) to 23 (11 PM).
These variables can also be modified after the cluster is first created.
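For example, to schedule the window for Sunday at 3 AM in the cluster region’s time zone, you could set the following values (shown here as they might appear in terraform.tfvars; the variables themselves are defined later in this guide):
update_day  = 1
update_hour = 3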
Configure Terraform Providers
A “provider” in Terraform maps the HashiCorp Configuration Language (HCL) to an API, like the Linode API, so Terraform can communicate with various software and cloud platforms. To configure the database cluster with Terraform, you first need to declare which Terraform providers are used in the configuration and enter some required information.
On your workstation, create a directory named postgres-terraform. All Terraform configuration files in this guide are stored under this directory:
mkdir postgres-terraform
Create a file in your postgres-terraform/ directory named providers.tf, and paste in the following snippet:
- File: postgres-terraform/providers.tf
terraform {
  required_providers {
    linode = {
      source  = "linode/linode"
      version = "2.35.1"
    }
    postgresql = {
      source  = "a0s/postgresql"
      version = "1.14.0-jumphost-1"
    }
  }
}

provider "linode" {
  token = var.linode_token
}

provider "postgresql" {
  database = "defaultdb"
  host     = linode_database_postgresql_v2.pgsql-cluster-1.host_primary
  port     = linode_database_postgresql_v2.pgsql-cluster-1.port
  username = linode_database_postgresql_v2.pgsql-cluster-1.root_username
  password = linode_database_postgresql_v2.pgsql-cluster-1.root_password
  sslmode  = "require"
}
The Terraform file above uses two providers:
The Linode provider (linode/linode) talks to the Linode API to build the infrastructure. At the time of writing, version 2.35.1 of the Linode provider is the latest.
Note: Check the Terraform Registry for the latest versions prior to deploying. If using a different version, be aware of the proper version syntax.
A PostgreSQL provider talks to the PostgreSQL management endpoint. There are multiple PostgreSQL providers, and this guide uses the a0s/postgresql provider.
Note: MySQL providers are also available if you want to provision a MySQL cluster.
The providers need to be configured to work with your specific environment:
The Linode provider requires a personal access token (PAT) with read and write permissions for the Managed Databases service. This helps ensure proper user authentication to your Akamai Cloud account.
The PostgreSQL provider requires:
A collection of information that is derived from the cluster deployment: the username, password, hostname, and TCP port to use when connecting. These values are attributes of the linode_database_postgresql_v2.pgsql-cluster-1 resource, defined later in a file called main.tf.
The name of the database to connect to. This is assigned the value defaultdb. This entry is important because the psql interface requires an existing database in the connection string, even when creating a new database. defaultdb is created for you during the creation of the database cluster.
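Rather than entering the token interactively each time Terraform asks for it, you can export it as a Terraform input variable in your shell. The TF_VAR_ prefix is standard Terraform behavior for supplying variable values from the environment; replace the placeholder with your own token:
export TF_VAR_linode_token="your-linode-api-token"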
Configure the Managed PostgreSQL Cluster with Terraform
Create a file named main.tf in your postgres-terraform/ directory and paste in the following Terraform code snippet:
- File: postgres-terraform/main.tf
resource "linode_database_postgresql_v2" "pgsql-cluster-1" { label = var.db_clustername engine_id = var.rdbms_ver region = var.region type = var.db_instance_type allow_list = ["0.0.0.0/0"] cluster_size = var.cluster_nodes updates = { duration = 4 frequency = "weekly" hour_of_day = var.update_hour day_of_week = var.update_day } lifecycle { ignore_changes = [host_primary] } timeouts { create = "30m" update = "30m" delete = "30m" } } module "database1" { source = "./modules/databases" database = "defaultdb" db_host = linode_database_postgresql_v2.pgsql-cluster-1.host_primary db_port = linode_database_postgresql_v2.pgsql-cluster-1.port db_user = linode_database_postgresql_v2.pgsql-cluster-1.root_username db_password = linode_database_postgresql_v2.pgsql-cluster-1.root_password db_list = var.db_list1 cluster_id = linode_database_postgresql_v2.pgsql-cluster-1.id depends_on = [linode_database_postgresql_v2.pgsql-cluster-1] providers = { postgresql = postgresql } } module "database2" { source = "./modules/databases" database = "defaultdb" db_host = linode_database_postgresql_v2.pgsql-cluster-1.host_primary db_port = linode_database_postgresql_v2.pgsql-cluster-1.port db_user = linode_database_postgresql_v2.pgsql-cluster-1.root_username db_password = linode_database_postgresql_v2.pgsql-cluster-1.root_password db_list = var.db_list2 cluster_id = linode_database_postgresql_v2.pgsql-cluster-1.id depends_on = [linode_database_postgresql_v2.pgsql-cluster-1] providers = { postgresql = postgresql } } resource "local_file" "db_certificate" { filename = "${linode_database_postgresql_v2.pgsql-cluster-1.id}.crt" content = linode_database_postgresql_v2.pgsql-cluster-1.ca_cert }
This represents the primary logic of the root module in your Terraform configuration. The following Terraform resources and modules are declared:
A linode_database_postgresql_v2 resource named pgsql-cluster-1. This is the Managed Database cluster that is provisioned. Several parameters, like engine_id, refer to variables that are later defined in the variables.tf and terraform.tfvars files.
The allow_list parameter defines which IP addresses can access the cluster. In this demonstration, all addresses are permitted, but you should restrict this for your cluster. Assign it a list of address ranges in CIDR notation that covers the applications that need to access the databases, as well as your Terraform management infrastructure (e.g. the workstation that Terraform is installed on); see the example after this list.
The lifecycle and timeouts parameters are assigned values that are compatible with the Linode and database providers. Keep these same values for your deployments.
The database1 and database2 module declarations both reference files in the modules/databases/ directory, which are created in a later step. These represent two separate database resources within your cluster. Several parameters of these modules, like db_host, are assigned values provided by the pgsql-cluster-1 cluster resource, so they rely on the infrastructure being created before the database layer is managed. The depends_on parameter ensures that these database modules are invoked after the cluster resource is created. As before, the defaultdb database name is provided for the initial connection to the cluster.
A local_file resource named db_certificate is assigned the ca_cert value from the database cluster. This file is created in the root of the Terraform project, and it is used by your applications to securely connect to the database platform.
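As an example of a more restrictive allow_list, the following values (placeholder ranges from the documentation address space) would limit access to a hypothetical application subnet and a single workstation address:
allow_list = ["203.0.113.0/24", "198.51.100.25/32"]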
Create a file named variables.tf in your postgres-terraform/ directory and paste in the following snippet:
- File: postgres-terraform/variables.tf
variable "linode_token" { description = "Linode API Personal Access Token" sensitive = true } variable "db_list1" { description = "Databases to exist on cluster. More than 1 DB can be specified here." type = list(string) default = ["database-1"] } variable "db_list2" { description = "Databases to exist on cluster. More than 1 DB can be specified here." type = list(string) default = ["database-2"] } variable "db_clustername" { description = "Label for Akamai Cloud system to ID Cluster. This must be unique in your environment. Must be between 3-32 chars, no spec chars except single hyphens" type = string default = "My-PgSQl-Cluster" } variable "rdbms_ver"{ description = "Type and Version of RDBMS. Pull the current supported list via API" type = string default = "postgresql/17" } variable "region" { description = "Region for DB Cluster" type = string default = "us-ord" } variable "db_instance_type" { description = "Linode type of DB Cluster Nodes - Storage, RAM and Compute of a single node equals rough DB capacity. Pull the current list from Linode API" type = string default = "g6-dedicated-4" } variable "cluster_nodes" { description = "Number of Database Cluster Nodes must equal 1, 2 or 3" type = number default = 2 } variable "update_hour" { description = "Hour to apply RDBMS updates midnight through 11pm = 0-23" type = number default = 22 } variable "update_day" { description = "Day to apply RDBMS updates Sun-Sat= 1-7" type = number default = 7 }
The variable definitions in this file are referenced in your resource and module declarations to build the database cluster and deploy two databases. Each definition includes a description, which documents formatting and acceptable values, and a default value. The rdbms_ver, region, db_instance_type, cluster_nodes, update_hour, and update_day variables correspond to the values chosen in the Database Cluster Options section.
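The terraform.tfvars file from the project structure is where you override these defaults with your own choices. This guide does not prescribe its contents; the following is a minimal sketch that simply restates the defaults and placeholder database names, so adjust each value to your own deployment. The linode_token variable is deliberately omitted so the token is not stored in plain text; Terraform prompts for it, or you can supply it via the TF_VAR_linode_token environment variable mentioned earlier.
- File: postgres-terraform/terraform.tfvars
db_clustername   = "My-PgSQl-Cluster"
region           = "us-ord"
db_instance_type = "g6-dedicated-4"
rdbms_ver        = "postgresql/17"
cluster_nodes    = 2
update_day       = 7
update_hour      = 22
db_list1         = ["database-1"]
db_list2         = ["database-2"]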
Create a file named outputs.tf in your postgres-terraform/ directory and paste in the following snippet:
- File: postgres-terraform/outputs.tf
output "database_password" { value = linode_database_postgresql_v2.pgsql-cluster-1.root_password sensitive = true description = "The password associated to the admin username" } output "database_username" { value = linode_database_postgresql_v2.pgsql-cluster-1.root_username sensitive = true description = "The admin username" } output "database_fqdn" { value = linode_database_postgresql_v2.pgsql-cluster-1.host_primary description = "The fqdn you can use to access the DB" } output "db_certificate" { value = linode_database_postgresql_v2.pgsql-cluster-1.ca_cert sensitive = true description = "The certificate used for DB Connections" } output "database_port" { value = linode_database_postgresql_v2.pgsql-cluster-1.port description = "The TCP Port used by the database" } output "database_id" { description = "The cluster ID used by Akamai" value = linode_database_postgresql_v2.pgsql-cluster-1.id } output "database1_created_databases" { value = module.database1.databases description = "List of databases created by database1 module" } output "database2_created_databases" { value = module.database2.databases description = "List of databases created by database2 module" }
This is a set of Terraform output definitions. Terraform outputs print their values to the command line when the Terraform configuration is applied. The values from these outputs are later used with the psql client to connect to the database cluster.
Create a modules/ directory with a databases/ subdirectory underneath it for your project:
mkdir -p postgres-terraform/modules/databases/
Create a file named main.tf in your postgres-terraform/modules/databases/ directory and paste in the following snippet:
- File: postgres-terraform/modules/databases/main.tf
terraform {
  required_providers {
    postgresql = {
      source  = "a0s/postgresql"
      version = "1.14.0-jumphost-1"
    }
  }
}

resource "postgresql_database" "databases" {
  for_each = toset(var.db_list)
  name     = each.value
  owner    = var.db_user

  depends_on = [var.cluster_id]

  lifecycle {
    ignore_changes = [owner]
  }
}
This is a child module of the root module. It represents a reusable set of instructions referenced by the root module to create databases in the cluster in a repeatable fashion.
This module uses the postgresql_database resource from the a0s/postgresql provider to create the databases on the managed cluster.
Create a file named variables.tf in your postgres-terraform/modules/databases/ directory and paste in the following snippet:
- File: postgres-terraform/modules/databases/variables.tf
variable "db_host" { description = "host connection string" type = string } variable "db_port" { description = "cluster port" type = number } variable "db_user" { description = "admin cluster user" type = string sensitive = true } variable "db_password" { description = "admin user pass" type = string sensitive = true } variable "db_list" { description = "list of dbs" type = list(string) } variable "cluster_id" { description = "id of cluster" type = string } variable "database" { description = "db to connect to" type = string }
These variables are assigned values in the module declarations of the main.tf file in the root module (step 1 of this section).
Create a file named outputs.tf in your postgres-terraform/modules/databases/ directory and paste in the following snippet:
- File: postgres-terraform/modules/databases/outputs.tf
output "databases" { value = keys(postgresql_database.databases) description = "List of created databases" }
The databases output is a list of the names of the databases created, and it is referenced in the database1_created_databases and database2_created_databases outputs of the root module.
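Because the module iterates over db_list with for_each, adding a name to either list in a later run creates an additional database without affecting the existing ones. For example (the reports name is purely illustrative), setting the following in terraform.tfvars and re-running terraform apply would add a third database to the cluster:
db_list1 = ["database-1", "reports"]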
Provision the Managed PostgreSQL Cluster with Terraform
The Terraform configuration is now complete and ready to be used to create infrastructure:
While inside the root postgres-terraform/ project directory, run Terraform’s init command:
cd postgres-terraform/
terraform init
This command downloads the Linode and PostgreSQL providers to the local execution environment and ensures you have everything in place to build your project:
Initializing the backend...
Initializing modules...
Initializing provider plugins...
- Finding a0s/postgresql versions matching "1.14.0-jumphost-1"...
- Finding linode/linode versions matching "2.35.1"...
- Installing linode/linode v2.35.1...
- Installed linode/linode v2.35.1 (signed by a HashiCorp partner, key ID F4E6BBD0EA4FE463)
- Installing a0s/postgresql v1.14.0-jumphost-1...
- Installed a0s/postgresql v1.14.0-jumphost-1 (self-signed, key ID 5A0BE9D2989FD2A2)

Terraform has been successfully initialized!
Run Terraform’s plan command. This performs a test run of your configuration and ensures it is syntactically correct. This command also asks for your Linode API token, but it does not actually use it to create infrastructure:
terraform plan
A summary of proposed changes is displayed, along with any warnings. The configuration from the previous section results in a warning, but this is expected behavior:
│ Warning: Redundant ignore_changes element
│
│   on main.tf line 1, in resource "linode_database_postgresql_v2" "pgsql-cluster-1":
│    1: resource "linode_database_postgresql_v2" "pgsql-cluster-1" {
│
│ Adding an attribute name to ignore_changes tells Terraform to ignore future changes to the argument in configuration after the object has been created, retaining the value originally configured.
│
│ The attribute host_primary is decided by the provider alone and therefore there can be no configured value to compare with. Including this attribute in ignore_changes has no effect. Remove the attribute from
│ ignore_changes to quiet this warning.
To combine database management and infrastructure management in one configuration, a lifecycle policy is used to ignore changes to the host_primary attribute. Terraform generates a warning for this. Since this value doesn’t actually change, it is safe to ignore future changes to it.
If there are no errors (aside from the warning described above), you can run Terraform’s apply command:
terraform apply
You should be prompted by the command for your personal access token.
Terraform shows a summary of the proposed changes and asks if you would like to proceed. Enter yes at this prompt.
The build process begins and can take time to complete. Once complete, Terraform provides a summary of successful actions and a list of non-sensitive outputs, like below:
Plan: 4 to add, 0 to change, 0 to destroy.

Changes to Outputs:
  database1_created_databases = [
      "database-1",
    ]
  database2_created_databases = [
      "database-2",
    ]
  database_fqdn                = (known after apply)
  database_id                  = (known after apply)
  database_password            = (sensitive value)
  database_port                = (known after apply)
  database_username            = (sensitive value)
  db_certificate               = (sensitive value)
╷
│ Warning: Redundant ignore_changes element

linode_database_postgresql_v2.pgsql-cluster-1: Creating...
linode_database_postgresql_v2.pgsql-cluster-1: Still creating... [7m30s elapsed]
linode_database_postgresql_v2.pgsql-cluster-1: Creation complete after 7m35s [id=258475]
local_file.db_certificate: Creating...
module.database1.postgresql_database.databases["database-1"]: Creating...
module.database2.postgresql_database.databases["database-2"]: Creating...
local_file.db_certificate: Creation complete after 0s [id=9ff519506c470ac8707472757a0880365d7bde03]
module.database2.postgresql_database.databases["database-2"]: Creation complete after 1s [id=database-2]
module.database1.postgresql_database.databases["database-1"]: Creation complete after 1s [id=database-1]

Apply complete! Resources: 4 added, 0 changed, 0 destroyed.

Outputs:

database1_created_databases = [
  "database-1",
]
database2_created_databases = [
  "database-2",
]
database_fqdn = "a258475-akamai-prod-3474339-default.g2a.akamaidb.net"
database_id = "258475"
database_password = <sensitive>
database_port = 26010
database_username = <sensitive>
db_certificate = <sensitive>
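To review the resources Terraform now manages, you can list them from the state. With the configuration in this guide, the output resembles the following:
terraform state list
linode_database_postgresql_v2.pgsql-cluster-1
local_file.db_certificate
module.database1.postgresql_database.databases["database-1"]
module.database2.postgresql_database.databases["database-2"]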
Connect to the Managed PostgreSQL Cluster with psql
You should now have a fully functional PostgreSQL cluster running on Akamai Cloud’s Managed Database service with two databases. You can now test access to the cluster using the psql command:
Gather the information needed for the connection from Terraform’s outputs. This information was displayed after the terraform apply command completed, but you can also run Terraform’s output command to retrieve it:
terraform output
The output from this command hides sensitive information by default, like the database_username and database_password outputs. Use the -raw flag to reveal the password:
terraform output -raw database_password
Save this password, as it is used later by the psql command.
Run the following commands to create environment variables in your terminal for your database FQDN, port, and username:
psqlhost=$(terraform output -raw database_fqdn)
psqlport=$(terraform output -raw database_port)
psqluser=$(terraform output -raw database_username)
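Optionally, you can also export the password so that psql does not prompt for it interactively. PGPASSWORD is a standard libpq environment variable; keep in mind that this stores the password in your shell environment:
export PGPASSWORD=$(terraform output -raw database_password)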
Use the psql command to connect to your new cluster and verify the databases exist:
psql -h$psqlhost -ddefaultdb -U$psqluser -p$psqlport
You are prompted for your password. Copy and paste the output from the previous terraform output -raw database_password command at this prompt. A successful connection displays output like the following:
psql (14.17 (Ubuntu 14.17-0ubuntu0.22.04.1), server 17.4)
WARNING: psql major version 14, server major version 17.
         Some psql features might not work.
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.

defaultdb=>
At the database prompt, enter the \l command:
defaultdb=> \l
The two databases from the Terraform configuration, database-1 and database-2, are listed in the output:
                                List of databases
    Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
------------+----------+----------+-------------+-------------+-----------------------
 _aiven     | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =T/postgres          +
 database-1 | akmadmin | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 database-2 | akmadmin | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 defaultdb  | akmadmin | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
 template0  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
            |          |          |             |             | postgres=CTc/postgres
 template1  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
            |          |          |             |             | postgres=CTc/postgres
(6 rows)
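As an optional further check, you can connect to one of the new databases and create and drop a simple test table. The table and column names below are only examples:
defaultdb=> \c database-1
database-1=> CREATE TABLE smoke_test (id serial PRIMARY KEY, note text);
database-1=> \dt
database-1=> DROP TABLE smoke_test;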
Enter the \q command to exit the database prompt:
defaultdb=> \q
Destroy the Managed PostgreSQL Cluster
If you would like to destroy the database cluster and the underlying compute instances, run Terraform’s destroy command on your workstation while inside your postgres-terraform/ directory:
terraform destroy
Once destroyed, new costs stop accruing for the cloud infrastructure that was previously provisioned.
More Information
You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.