Categories: How-To, Web Design

Installing Google Firestore for PHP

Using the NoSQL database Firestore with PHP means you need to install a few tools to make it work. The alternative is to use the JavaScript client, but once Firestore access is locked down into its more secure mode, the backend option with PHP may be needed.

The Google docs point to installing the gRPC library for PHP via two options: either PECL or Composer. But as with many technical docs, they seem to miss a step or two, leaving the reader a little lost and probably frustrated. Hence I’ll step through what I did here with PECL for Ubuntu 18.04 in WSL on Windows 10. (BTW – I thoroughly recommend Windows Subsystem for Linux in Windows 10 for web development; it’s a good middle ground of dev tools and productivity. Grab the new Windows Terminal app to get easy terminal access to the WSL instance too.)

First, go and have a read through the docs from Google linked above. It’ll give you the view of what needs doing, and hey, maybe it will work for your environment. Otherwise I’ll describe the steps below that I needed to go through.

NOTE – Where you see a PHP version in a command, substitute the version you’re actually running so you get the matching packages.
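If you’re not sure which PHP version you’re running, a quick check from the terminal will tell you:

php -v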

sudo apt-get update
sudo apt install php7.2-dev autoconf libz-dev php-pear
sudo pecl install grpc

The PECL install step will likely take a few minutes while it goes back to basics and compiles the gRPC module. Lots of information will scroll up the terminal window.

After PECL has completed and successfully created the grpc.so, you’ll need to follow the instructions in the PECL output and update the php.ini config to ensure the module will be loaded.
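In my case that meant adding a line like the following to php.ini (the PECL output will confirm the exact module name):

extension=grpc.so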

After you have edited your php.ini file, reload your web server to ensure new processes are loaded. Use of Apache is assumed with the commands below.

sudo nano /etc/php/7.2/apache2/php.ini
sudo service apache2 restart

Now you should have everything compiled and loaded as expected. If you have a test file that dumps out your PHP config, load it and search for “grpc”. If you don’t have one, I’d suggest creating it in a dev environment to help out with server config. All you need is a PHP script file in your web server directory containing the following.

<?php
phpinfo();
?>

Loading that file in your browser should then show the config section we’re after.
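If you prefer the terminal, you can also confirm the module is loaded for the CLI, which uses a separate php.ini to Apache:

php -m | grep grpc

Note that gRPC is just the transport; the Firestore client library itself is installed with Composer, as the Google docs describe:

composer require google/cloud-firestore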

We’re done! Hopefully that worked for you or at least provided some useful information.

Categories: How-To, Sys Admin

Backup to AWS S3 Bucket

While this is not an uncommon thing to do, I couldn’t find a straightforward example that covered both databases and file directories. So of course, I had to write my own (albeit based on a database-only script from mittsh). For the TL;DR, just go to GitHub.

It’s written in Python using the ubiquitous boto3 library and reads the source databases and directories from a JSON configuration file. In probably less than five minutes you can complete your first backup and then just schedule it from then on out.
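To give a feel for what the script does, here’s a simplified sketch of the directory case. This is not the actual code from the repo, and the config keys (s3_bucket, s3_key, directories) are illustrative only:

import json
import tarfile
from datetime import datetime

import boto3

# Load the bucket, key prefix and source directories from the JSON config.
with open("backup-to-s3.json") as f:
    config = json.load(f)

s3 = boto3.client("s3")
timestamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")

for directory in config["directories"]:
    # Tar and gzip each source directory, then push the archive to S3.
    archive = "/tmp/" + directory.strip("/").replace("/", "_") + "-" + timestamp + ".tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(directory)
    s3.upload_file(archive, config["s3_bucket"], config["s3_key"] + "/" + archive.split("/")[-1])

The real script also handles database dumps, but the directory case is enough to show the shape of it.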

NOTE: The use of S3 incurs a cost. You are solely responsible for managing the use of that system and any costs incurred.

Installation

Copy the files in this repo or just “git clone” them to the machine that needs backing up.

Change into your new folder and install the libraries listed in the requirements.txt file, i.e. “pip install boto3 argparse --user”.

Rename and change the config file to suit your own needs. Run the script manually to make sure your config is working as expected.
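For reference, a hypothetical config might look something like the following. The key names here are illustrative, so check the sample file in the repo for the real schema.

{
  "s3_bucket": "my-backup-bucket",
  "s3_key": "server-01",
  "directories": ["/var/www/html", "/etc/nginx"],
  "databases": ["mydb"]
}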

If all is good then add it to your crontab to run as often as you like. Each backup file is named with the current timestamp to the second so multiple backups each day can be identified.

Run the backup as below, with full paths defined in case you’re putting it into crontab, and based on an Ubuntu machine layout. The user home is ubuntu in this example as that’s the default user name on AWS Ubuntu instances.

/usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json
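As a crontab entry, a nightly run of that same command at 2am would look like this:

0 2 * * * /usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json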

You can use the AWS S3 key values in the config to split different backups into S3 keys (like folders) based on your server names, client accounts, etc.

S3 and Glacier

If you have a heap of data in S3, it will start to cost you more than a coffee a day to keep it there. But AWS offers cheaper, longer-term storage in another product called Glacier. The nice thing about these two products is that the bucket properties in S3 can automatically “age out” files from S3 into Glacier. You then keep only the very newest backups in S3, and the rest end up in Glacier, where a few hundred GB only costs you a coffee per month.
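The transition can be set up under the bucket’s lifecycle rules in the S3 console, or scripted. As an illustration (my-backup-bucket being the hypothetical bucket from the config example above), a rule that moves objects to Glacier after 30 days can be applied with the AWS CLI:

aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration file://lifecycle.json

where lifecycle.json contains:

{
  "Rules": [
    {
      "ID": "age-out-backups",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
    }
  ]
}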

Categories: Code, Security, Sys Admin

Permissions Problems with git pull

I’ve started working on Doc5 from a laptop in the last few months and have begun the pull/push process to keep my Bitbucket repo and desktop machine all in sync. But while getting these sorted I found permission problems on one of the local repos. When I tried to do a pull, about eight files either couldn’t be unlinked or couldn’t be created.

Looking at the permissions on the files: I was the owner, www-data (Apache in Ubuntu) was the group, and the permissions were 644 on the files and 755 on the directories in my project folder. So that all seemed fine.

But what you need to watch for is the extra permission a process needs in order to unlink a file: write access on the containing directory, not just on the file itself. What git is doing is removing these files and then replacing them in the folder, i.e. it’s not just a modification through a write action on the file.
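A quick way to see that behaviour in action:

mkdir demo && touch demo/example.txt
chmod 555 demo          # directory readable and traversable, but not writable
rm demo/example.txt     # fails with “Permission denied”, whatever the file’s own mode
chmod 755 demo          # restore write on the directory
rm demo/example.txt     # now succeeds

So when git pull can’t unlink a file, check the write bits on the directories themselves before blaming the file modes.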

Categories: Interesting Stuff

Ignore Network Latency at Your Peril

We all know developers need to consider a few things outside their own backyard. Things like hardware and the network affect software even if there’s not much that can be done to control them (even if you’re Apple). This is especially true for the network if you develop software for mobile devices.

So to help us all understand why the nuances of any network matter, Nic Wise has a good little blog post about what to keep in mind. It’s written in people language rather than TCP layers, so we can all benefit from this one.