Farmhouse Networking virtualizes small office servers, reducing hardware footprint while improving uptime and disaster recovery for Oregon SMBs.
Had a client recently with a smaller office network (up to 6 concurrent users) and a server that processes orders from their website using software called StoneEdge. StoneEdge is a SQL-based database with a Microsoft Access front end. It was time to upgrade their server to new hardware, and cost was definitely an issue given their size and order volume. In this case we chose a Synology device to act as the virtual machine host and created a new virtual server on it. Here are some details:
4x Seagate 4TB IronWolf Pro 7200 rpm SATA III 3.5″ Internal NAS HDD
Picked the Synology based on its RAM expandability to 32 GB, its support for cache drives, and room for storage growth over time. It also has a 4-core 2.2 GHz Ryzen processor, which was plenty based on their old server's CPU usage.
Basics of Setup
Assembled the parts
Installed the Synology OS
Set up the HDDs in a Synology SHR-2 RAID
Added the M.2 drives as cache
Installed the Virtual Machine Manager app
Created a VM with the maximum available CPU and memory
Uploaded the ISO for the server OS
Installed the server OS
Set up the StoneEdge application
Migrated the data
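As a sanity check on capacity for the build above: SHR-2 on equal-size drives behaves like RAID 6, keeping two drives' worth of redundancy, so the four 4 TB IronWolf drives yield roughly:

```shell
# SHR-2 on equal-size drives ~ RAID 6: usable = (drives - 2) * drive_size
drives=4
drive_size_tb=4
echo $(( (drives - 2) * drive_size_tb ))   # usable TB -> 8
```

Real-world usable space will be a bit lower after filesystem overhead, and SHR-2 math differs when drive sizes are mixed.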
This build did not increase performance dramatically, but it let them spend about half the cost of a full server to accomplish the same purpose. It also reduced costs by using the Synology to back up the server locally and to S3 cloud storage for redundancy.
If your company is looking to move its servers to a virtual environment or into the cloud, then contact us to start the process.
Amazon AWS S3 Storage to the rescue again, with native support in WebHost Manager (WHM) for backing up directly to S3. WHM AWS S3 Glacier backup is easy to set up and works like a charm. Included in this post are the standard settings that I use for all WHM backups.
WHM AWS S3 Glacier Backup Setup
Login to WHM and run a quick search for backup at the top left of the navigation pane
Click on “Backup Configuration” to begin setup
Under “Backup Status” choose “Enabled”, under “Backup Type” choose “Compressed” to save on bandwidth and leave the timeouts at their defaults
Under “Scheduling and Retention” choose to do a backup each day of the week and keep them for 30 days
Under “Files” choose to disable “Backup Suspended Accounts”, choose to enable “Backup Access Logs”, choose to enable “Backup Bandwidth Data”, choose to disable “Use Local DNS” and put a check next to “Backup System Files” to allow for full restores if needed
Under “Databases” choose both “Per Account and Entire MySQL Directory”, under “Default Backup Directory” type in:
/backup
Leave the “Retain backups in default directory” unchecked and disable “Mount Backup Drive as Needed”
Under “Additional Destinations” choose Amazon S3 from the drop-down list and click on the “Create new destination” button
Assuming there is already a bucket and user created in AWS for this purpose: give the destination a name, check the box next to “Transfer System Backups to Destination”, under “Folder” type the name of a folder in the bucket that will be created and used, type in the name of the “Bucket”, enter the “Access Key ID” and “Secret Access Key”, then change the timeout to 60 seconds
Click on the “Save and Validate Destination” button to make sure all settings are correct
Finally, click on “Save Configuration” at the bottom to complete the WHM AWS S3 Glacier Backup setup.
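As a rough capacity check for the schedule above (a backup each day, retained 30 days), about 30 compressed archives will exist in /backup and at the S3 destination at any one time. The per-backup size below is a placeholder; substitute your own average compressed backup size:

```shell
# Steady-state storage estimate: daily backups kept for 30 days.
# avg_backup_gb is a placeholder value, not from the post.
retention_days=30
avg_backup_gb=5
echo $(( retention_days * avg_backup_gb ))   # GB held at steady state -> 150
```

This is handy for sizing the bucket and estimating S3 costs before enabling the schedule.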
Your WHM AWS S3 Glacier Backup has now been set up, and with proper setup of S3 Storage there will be automatic archival too. If your company is using Amazon Web Services or S3 Storage for backup and needs help getting it set up properly, then contact us for assistance.
I am growing to love Amazon Web Services (AWS) S3 Storage. The AWS S3 Bucket Lifecycle feature is great for those using S3 buckets for backups. A lifecycle can be applied to the whole bucket or to individual folders, with rules that trigger automatic archival or deletion. This post assumes that the bucket to be used for backups has already been created. For help setting up a new bucket, read the AWS Documentation for a walk-through of this simple process.
Configuring AWS S3 Bucket Lifecycle Archiving
Log into the AWS console and click into the S3 console
Click on the magnifying glass next to the S3 bucket being used for backup
On the right side of the screen, scroll down the list of properties to the Lifecycle item and expand it
Click on the “Add Rule” next to the green plus symbol
For backups, it is best to choose the whole bucket option in the “Apply this Rule to:” section of the wizard then click “Configure Rule” button
For the “Actions on Object” section choose the “Archive to the Glacier Storage Class” option with the “Days after object’s creation date” set to 7 days to automatically archive to S3 Glacier after a week. Choose the “Permanently Delete” option with the “Days after object’s creation date” set to 120 days to automatically delete from S3 Glacier after 4 months. Click on the “Review” button to continue.
Give the rule a name and click on the “Create and Activate Rule” button to finish the rule creation process
Make sure to click on the “Save” button to have the newly created rules saved.
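The same rule can also be expressed as JSON for the AWS CLI. This is a sketch assuming a bucket-wide rule with the 7-day Glacier transition and 120-day expiration from the steps above; the rule ID is made up:

```shell
# Bucket-wide lifecycle rule: transition to Glacier after 7 days,
# permanently delete after 120 days (matches the console walkthrough).
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "backup-archive-and-expire",
      "Status": "Enabled",
      "Filter": {},
      "Transitions": [ { "Days": 7, "StorageClass": "GLACIER" } ],
      "Expiration": { "Days": 120 }
    }
  ]
}
EOF
grep -c '"GLACIER"' lifecycle.json   # -> 1
```

It can then be applied with `aws s3api put-bucket-lifecycle-configuration --bucket [bucket-name] --lifecycle-configuration file://lifecycle.json`.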
Your AWS S3 Bucket Lifecycle has now been set up for automatic archival. If your company is using Amazon Web Services or S3 Storage for backup and needs help getting it set up properly, then contact us for assistance. Look for the next post that will show how to set up WHM for backup to S3 Storage.
Working with Amazon Web Services (AWS) S3 Storage requires a user account, so it is natural to create a specific AWS S3 Glacier backup user account when connecting an outside service or device for backup purposes. This post will detail how to create a user that has full rights to a specific bucket on S3 and Glacier Storage so that backup files can be created or edited from an outside source.
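The console steps appear to be missing from this copy of the post, but the policy it describes (full rights to a single bucket) can be sketched as an IAM policy document. The bucket name is a placeholder:

```shell
# IAM policy granting full S3 access to one bucket only.
# "my-backup-bucket" is a placeholder; both ARNs are needed
# (the bucket itself, and the objects inside it).
cat > backup-user-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::my-backup-bucket",
        "arn:aws:s3:::my-backup-bucket/*"
      ]
    }
  ]
}
EOF
grep -c 'my-backup-bucket' backup-user-policy.json   # -> 2
```

Scoping the user to one bucket this way means a leaked backup credential cannot touch anything else in the account.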
Now it is time to connect the backup service or device using the AWS S3 Glacier Backup user with the “Access Key ID” and “Secret Access Key” recorded earlier. If your company is using Amazon Web Services or S3 Storage for backup and needs help getting it set up properly, then contact us for assistance. Look for the next couple of posts that will show how to set up an S3 bucket for automatic archival and how to set up WHM for backup to S3 Storage.
Had a web hosting client who wanted to find the offending large files that were costing them, so I decided to check Amazon S3 bucket size and individual file sizes. It turned out that the best way to do this was via the AWS Command Line Interface, so I downloaded it from here and used the instructions here to add the Access Key ID, Secret Access Key, and Default Region to the AWS Command Line for a secure connection.
Check Amazon S3 Bucket Size
Open the standard Command Line then change directory by typing the following:
cd %ProgramFiles%\Amazon\AWSCLI\
Type in the following to check file sizes on a whole bucket:
aws s3 ls --summarize --human-readable --recursive s3://[bucket-name]/
Or type in the following to check file sizes on a particular directory:
aws s3 ls --summarize --human-readable --recursive s3://[bucket-name]/[directory]
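The --summarize flag prints a total at the end of the listing. On a saved listing, the per-object sizes (the third column of `aws s3 ls --recursive` output) can also be totaled with awk; the sample lines below are made up:

```shell
# Sum object sizes (bytes) from a saved `aws s3 ls --recursive` listing;
# the size is the third whitespace-separated field. Sample data is fabricated.
cat > listing.txt <<'EOF'
2024-01-01 00:00:00    1048576 backup/site1.tar.gz
2024-01-02 00:00:00    2097152 backup/site2.tar.gz
EOF
awk '{ total += $3 } END { print total }' listing.txt   # -> 3145728
```

This is useful when you want to total only a filtered subset of objects (pipe the listing through grep first).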
This was a huge help in finding the large files and directories with a simple Check Amazon S3 Bucket Size command in the AWS Command Line. If your company is using Amazon AWS S3 Storage for cloud storage or needs help getting cloud storage set up, then contact us for assistance.
The most basic task needed to get started with AWS is to have a user account, so it is natural to create a specific user account when accessing AWS Command Line Interface (CLI) for a specific purpose. This post will detail how to create a user that has read-only rights to S3 and Glacier Storage so that bucket content size can be determined. The details of how to install the AWS Command Line Interface can be found here and details of the setup are in linked documents from there.
Type in the username desired and leave the checkbox “Generate an access key for each user” checked then click the “Create” button at the bottom.
On this screen make sure to record the “Access Key ID” and “Secret Access Key” that will be needed when setting up AWS CLI software.
Click on the “close” link twice to exit.
Click on the username of the user just created, then click on the “Attach Policy” button to give the user proper rights.
Select “AmazonS3ReadOnlyAccess” and “AmazonGlacierReadOnlyAccess” from the list, then click on the “Attach Policy” button.
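The same user setup can be done from the AWS CLI. This sketch wraps each command in a dry-run echo so it is safe to paste; the username is a placeholder, and removing the `run` wrapper executes the commands for real:

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "$@"; }

run aws iam create-user --user-name s3-readonly-user
run aws iam attach-user-policy --user-name s3-readonly-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess
run aws iam attach-user-policy --user-name s3-readonly-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonGlacierReadOnlyAccess
```

The console flow above also generates an access key; the CLI equivalent is `aws iam create-access-key --user-name s3-readonly-user`, which prints the Access Key ID and Secret Access Key once.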
Once connected to the AWS Command Line Interface, commands can be run to check bucket content size, which will be covered in a separate post. If your company is using Amazon Web Services or S3 Storage and needs help getting it set up properly, then contact us for assistance.
And God will generously provide all you need. Then you will always have everything you need and plenty left over to share with others. As the Scriptures say,
“They share freely and give generously to the poor. Their good deeds will be remembered forever.”
For God is the one who provides seed for the farmer and then bread to eat. In the same way, he will provide and increase your resources and then produce a great harvest of generosity in you. - 2 Corinthians 9:8-10