
No Format
Disclaimer: All of this information is put in place by ProcessMaker for its cloud clients and works well for that kind of configuration. If it is ever applied to on-premises clients, ProcessMaker cannot and does not guarantee the same results. Use at your own risk. ProcessMaker cannot be blamed for any issue that might happen during the implementation of these guidelines.


1. Best Practices

  • Regularly back up your ProcessMaker (PM) files from your server, saving the configuration as a full backup to be used in case of disaster or data loss.

  • Deploy critical components of the application across multiple Availability Zones, and replicate data appropriately.

  • Monitor and respond to events.

  • Ensure that you are prepared to handle failover.

  • For a basic solution, you can manually copy the latest backup to a new server.

  • Regularly test the process of recovering your data and verify that the restores succeed.
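The manual-copy and restore-testing practices above can be sketched as a small rotation script. The paths, archive naming, and retention count are assumptions for illustration; a real deployment would point BACKUP_SRC at the ProcessMaker install (e.g. /opt/processmaker) and BACKUP_DIR at a dedicated backup volume.

```shell
#!/bin/bash
# Sketch of a rotating full backup; names and retention count are examples.
set -e

# In production BACKUP_SRC would be the ProcessMaker install (/opt/processmaker)
# and BACKUP_DIR a dedicated backup volume; temp dirs keep this sketch runnable.
BACKUP_SRC=$(mktemp -d)
echo "demo config" > "$BACKUP_SRC/env.ini"
BACKUP_DIR=$(mktemp -d)
KEEP=7   # number of archives to retain

# Full backup with a timestamped name.
STAMP=$(date +%F-%H%M%S)
tar -czf "$BACKUP_DIR/PM-$STAMP.tar.gz" -C "$BACKUP_SRC" .

# Rotate: delete everything but the newest $KEEP archives.
ls -1t "$BACKUP_DIR"/PM-*.tar.gz | tail -n +$((KEEP + 1)) | xargs -r rm -f

# Verify the archive can be read back (the "test your restores" practice).
tar -tzf "$BACKUP_DIR"/PM-*.tar.gz > /dev/null && echo "backup verified"
```

Running the verification step on every backup is what turns "we have backups" into "we can restore"; a backup that has never been listed or extracted is unproven.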

2. Workflow

Life Cycle

AWS Life Cycle

[Figure: AWS Life Cycle diagram]

Graphic 1

ProcessMaker Backup Diagram

[Figure: AWS Backup Diagram]

Graphic 2

Using s3cmd

With this method, we recommend the following command line:

Code Block
languagebash
titles3cmd
s3cmd --config /root/.s3cfg sync /backups/PM.tar.gz s3://bucket/server/


For more information about AWS S3 go to https://aws.amazon.com/s3/
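To run the sync above on a schedule, a cron entry is one option. The timing, log path, and cron file name below are assumptions; the block only writes the entry to a temporary file for inspection rather than installing it.

```shell
# Hypothetical nightly schedule for the s3cmd sync shown above.
# Time, log path, and destination bucket are placeholders.
cat > /tmp/pm-backup-cron <<'EOF'
# m h dom mon dow user command
30 2 * * * root s3cmd --config /root/.s3cfg sync /backups/PM.tar.gz s3://bucket/server/ >> /var/log/pm-backup.log 2>&1
EOF
# Install with: cp /tmp/pm-backup-cron /etc/cron.d/pm-backup
```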

Mounting AWS S3

With this method you can rsync your files directly into your S3 storage; we recommend the following command line:

Code Block
languagebash
titlersync
rsync -avz /backups/backup.tar.gz /mnt/backups/servers/


Or you can write the tarball directly to the mount point; we recommend the following command line:

Code Block
languagebash
titletar
tar -czvf /mnt/backups/servers/backup.tar.gz /opt/processmaker
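The rsync and tar commands above assume the bucket is already mounted at /mnt/backups; the document does not name a mounting tool, but s3fs-fuse is one common choice. The bucket name and credentials file below are assumptions, and the block only prints the mount command rather than executing it.

```shell
# Hypothetical s3fs-fuse mount backing the /mnt/backups path used above.
# "bucket" and the passwd file location are placeholders.
MOUNT_CMD="s3fs bucket /mnt/backups -o passwd_file=/etc/passwd-s3fs -o allow_other"
echo "$MOUNT_CMD"   # run as root once s3fs-fuse is installed
```

Note that a FUSE mount makes S3 look like a filesystem, but each write is still an object upload; large tarballs stream more predictably than many small files.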


Objects are archived to the Glacier storage class 7 days after the object's creation date.

[Figure: S3 to Glacier lifecycle]

Graphic 3

Amazon Glacier is a secure, durable, cloud storage service for data archiving and long-term backup.

This task is done automatically once it is configured in the life cycle.


3. Retention period

  • In S3, the retention period is 7 days.

  • After 7 days, files are transferred to Glacier storage, where the retention period is 30 days.

  • After those 30 days, the file is deleted from storage.
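The retention above (7 days in S3, then 30 in Glacier) maps to a single S3 lifecycle rule: transition at day 7, expire at day 37. The sketch below generates such a rule; the bucket, prefix, and rule ID are assumptions, and the aws CLI call is shown as a comment rather than executed.

```shell
# Generate a lifecycle rule matching the retention policy above.
# The prefix and rule ID are placeholders for your bucket layout.
cat > /tmp/pm-lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "pm-backup-retention",
      "Status": "Enabled",
      "Filter": {"Prefix": "server/"},
      "Transitions": [{"Days": 7, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 37}
    }
  ]
}
EOF
# Apply with:
#   aws s3api put-bucket-lifecycle-configuration --bucket bucket \
#     --lifecycle-configuration file:///tmp/pm-lifecycle.json
```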

4. Recovery Time

  • From S3 storage, the recovery time depends on the size of the file (object); in practice, retrieval is immediate.

  • From Glacier storage, retrieval takes 3 to 5 hours, regardless of the size of the file.
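The 3-to-5-hour Glacier retrieval above must be triggered explicitly: a restore request makes the archived object temporarily readable in S3 again. The bucket, key, and availability window below are assumptions; the block prints the command rather than running it.

```shell
# Hypothetical Glacier restore request; bucket/key are placeholders and
# Days=2 is how long the restored copy stays readable in S3.
RESTORE_CMD='aws s3api restore-object --bucket bucket --key server/PM.tar.gz --restore-request Days=2,GlacierJobParameters={Tier=Standard}'
echo "$RESTORE_CMD"
```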

5. Database Backup

There are two options:

  • An EC2 instance as the MySQL server

  • An RDS instance

EC2 instance as the MySQL server:

  • mysqldump: we recommend the following command:



Code Block
languagebash
titleMySqldump
TIMESTAMP=$(date +"%F-%H%M%S")
BACKUP_DIR="/backupMysqlServer/$TIMESTAMP"
MYSQL_USER="user"
MYSQL_PASSWORD="password"
HOST1="localhost"
MYSQLDUMP=/usr/bin/mysqldump
DB="database_name"   # set to the database to back up

mkdir -p "$BACKUP_DIR/mysql"
$MYSQLDUMP --force --opt --verbose --lock-tables \
  --user="$MYSQL_USER" -h "$HOST1" --password="$MYSQL_PASSWORD" \
  --databases "$DB" | gzip > "$BACKUP_DIR/mysql/$DB.gz"



With this method, getting a consistent backup requires locking the database, which can block the application. To read more about how mysqldump works, go to:

https://dev.mysql.com/doc/refman/5.5/en/mysqldump.html

  • Percona XtraBackup: provides a non-blocking, online, real-time backup. Its benefits include fast and reliable backups:

    • Uninterrupted transaction processing during backups

    • Savings on disk space and network bandwidth with better compression

    • Automatic backup verification

    • Higher uptime due to faster restore time

For this, we recommend the following commands:


Code Block
languagebash
titlePercona XtraBackup
innobackupex --user=bkpuser --password=bkppassword /data/backups
innobackupex --apply-log /data/backups/*



For more information about Percona Xtrabackup go to:

https://www.percona.com/software/mysql-database/percona-xtrabackup

6. Retention period

The retention period, whether using mysqldump or Percona XtraBackup, is the same as described in point 3 (Retention Period).

7. Database Restore

When using mysqldump, the restore command is:


Code Block
languagebash
titleMysqlDump Restore
gunzip [backupfile.sql.gz]

mysql -u [uname] -p[pass] [db_to_restore] < [backupfile.sql]


When using Percona XtraBackup, use the following command:


Code Block
languagebash
titlePercona Xtrabackup Restore
innobackupex --copy-back /data/backups/new_backup



8. AWS RDS Database Server

When our clients opt to use AWS RDS as their database server, backups follow these points:

  • Amazon RDS creates and saves automated backups of your DB instance. Amazon RDS creates a storage volume snapshot of your DB instance, backing up the entire DB instance and not just individual databases.

  • Amazon RDS creates automated backups of your DB instance during the backup window of your DB instance. Amazon RDS saves the automated backups of your DB instance according to the backup retention period that you specify. If necessary, you can recover your database to any point in time during the backup retention period.
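Point-in-time recovery from these automated backups can be done from the console or the AWS CLI. The instance identifiers and timestamp below are placeholders; the restore always creates a new instance rather than overwriting the source, and the block prints the command rather than running it.

```shell
# Hypothetical point-in-time restore; identifiers and time are placeholders.
# RDS creates a NEW instance from the automated backups.
CMD="aws rds restore-db-instance-to-point-in-time \
  --source-db-instance-identifier pm-prod \
  --target-db-instance-identifier pm-prod-restore \
  --restore-time 2020-01-01T03:00:00Z"
echo "$CMD"
```

Once the new instance is available, the application's database host is switched over to it, which is why rehearsing this procedure belongs in the failover preparation listed under Best Practices.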