How-To Web Design

Installing Google Firestore for PHP

Using the NoSQL database Firestore with PHP means installing a few extra tools to make it work. The alternative is the JavaScript client, but once Firestore access is locked down to a more secure mode, a backend option such as PHP may be required.

The Google docs point to installing the gRPC library for PHP via two options, either PECL or Composer. But as with many technical docs, they seem to miss a step or two, which leaves the reader a little lost and probably frustrated. Hence I’ll step through what I did here with PECL for Ubuntu 18.04 in WSL on Windows 10. (BTW – I thoroughly recommend Windows Subsystem for Linux in Windows 10 for web development; it’s a good middle ground of dev tools and productivity experience. Grab the new Windows Terminal app to get that easy terminal access to the WSL instance too.)

First, go and have a read through the docs from Google linked above. It’ll give you the view of what needs doing, and hey, maybe it will work for your environment. Otherwise I’ll describe the steps below that I needed to go through.

NOTE – Where you see a PHP version in a command, make sure you use your current version to get the right framework version.

sudo apt-get update
sudo apt install php7.2-dev autoconf libz-dev php-pear
sudo pecl install grpc

The PECL install step will likely take a few minutes while it goes back to basics and compiles the gRPC module. Lots of information will scroll up the terminal window.

After PECL has completed and successfully compiled the extension, you’ll need to follow the instructions in the PECL output and update the php.ini config to ensure the module will be loaded.
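For reference, the directive you end up adding is normally a single extension line. A sketch, assuming the PHP 7.2 Apache php.ini path used in this post:

```shell
# Append the gRPC extension directive to php.ini
# (path assumes PHP 7.2 with Apache — adjust per the NOTE above)
echo "extension=grpc.so" | sudo tee -a /etc/php/7.2/apache2/php.ini
```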

After you have edited your php.ini file, reload your web server to ensure new processes are loaded. Use of Apache is assumed with the commands below.

sudo nano /etc/php/7.2/apache2/php.ini
sudo service apache2 restart

Now you should have everything compiled and loaded as expected. If you have a test file that’ll dump out your PHP config, load that and search for “grpc”. If you don’t have a file that’ll do that, I’d suggest you create one in a dev environment to help out with the server config. All you need is the following line in a PHP script file in your web server directory.

<?php phpinfo();

Loading that file in your browser should then show us that config section we’re after.

We’re done! Hopefully that worked for you or at least provided some useful information.

How-To Sys Admin

Backup to AWS S3 Bucket

While this is not an uncommon thing to do, I couldn’t find a straightforward example for both databases and file directories. So of course, I had to write my own (albeit based on a database-only script from mittsh). For the TL;DR, just go to GitHub.

It’s written in Python using the ubiquitous boto3 and just reads the config and source databases and directories from a JSON configuration file. In probably less than five minutes you can complete your first backup and then just schedule it from then on out.

NOTE: The use of S3 incurs a cost. You are solely responsible for managing the use of that system and any costs incurred.


Copy the files in this repo or just "git clone" them to the machine that needs backing up.

Change into your new folder and install the libraries listed in the requirements.txt file, i.e. "pip install boto3 argparse --user"

Rename and change the config file to suit your own needs. Run the script manually to make sure your config is working as expected.

If all is good then add it to your crontab to run as often as you like. Each backup file is named with the current timestamp to the second so multiple backups each day can be identified.

Run the backup as below. Full paths are used in case you’re putting it into crontab, and the layout assumes an Ubuntu machine. User home is ubuntu in this example as that’s the default user name on AWS Ubuntu instances.

/usr/bin/python /home/ubuntu/backup-to-s3/ /home/ubuntu/backup-to-s3/backup-to-s3.json
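Once you’re happy with a manual run, a crontab entry along these lines schedules it nightly. The 2am schedule and the backup-to-s3.py script name here are assumptions — substitute whatever the repo’s main script is actually called:

```shell
# Run the backup every night at 02:00 (script name is assumed)
0 2 * * * /usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json
```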

You can use the AWS S3 key values in the config to split different backups up into S3 keys (like folders) based on your server names or client accounts, etc.

S3 and Glacier

If you have a heap of data in S3 it will start to cost you more than a coffee a day to keep there. But AWS offer cheaper, longer-term storage in another product called Glacier. The nice thing about these two products is that the bucket properties in S3 can automatically “age out” files from S3 into Glacier. So then you only keep the very new backups in S3 and the rest end up in Glacier where a few hundred GB only costs you a coffee per month.
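As a sketch of that ageing-out setup, a lifecycle rule like the following transitions objects to Glacier after a set number of days. The bucket name, prefix, and the 30-day threshold are all assumptions — tune them to your own retention needs:

```shell
# Hypothetical bucket, prefix, and day count; requires AWS CLI credentials
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "age-out-to-glacier",
      "Status": "Enabled",
      "Filter": {"Prefix": "backups/"},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
    }]
  }'
```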

How-To Sys Admin

MySQL Fails to Update on Ubuntu

So your Ubuntu server doesn’t want to upgrade MySQL using apt-get and fails with the following error?

mysql_upgrade: Got error: 1045: Access denied for user 'debian-sys-maint'@'localhost' (using password: YES) while connecting to the MySQL server
Upgrade process encountered error and will not continue.

Thankfully the fix should be fairly easy to carry out. For some reason the password for the MySQL user debian-sys-maint has got out of sync in the MySQL database compared to that stored in /etc/mysql/debian.cnf.

Get the password that the update process thinks is in use from that file. You’ll need to do this as the root user.

grep 'password' /etc/mysql/debian.cnf

The same password should be echoed twice. Copy and paste the password somewhere safe – like a password manager tool.

Log into MySQL as root from the command line using the normal method below. You will need to use the password for the MySQL root user here when prompted.

mysql -u root -p

Reset the password for the debian-sys-maint user, making sure you substitute in your own password from above.

GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'password-here';

Now if you run the upgrade process again, it should progress and complete any MySQL server upgrades as needed.
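If you’d rather kick the upgrade step off directly instead of re-running apt, something like this should work — a sketch that points mysql_upgrade at the Debian maintenance credentials file mentioned above:

```shell
# Re-run the schema upgrade using the debian-sys-maint credentials
sudo mysql_upgrade --defaults-file=/etc/mysql/debian.cnf
```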

How-To Sys Admin

Allowing SSH Key Based Logins from Another System

I have a Digital Ocean server that I SSH into from my laptop, mainly for development purposes. But I also want to do scheduled downloads of the server backups from a server at home. So I need to SSH from a new machine to my server with no user prompt. Easy enough, except SSH always prompts me for a pass phrase, and I have multiple keys in use on my home server.

While you could just copy your private keys from Client1 to Client2 in order to do this, it’s not a great thing to be doing security-wise. So let’s just not do that.

What you need to do is create a new key pair on Client2 (actually my home server) with,

ssh-keygen

When prompted, make sure you tell it to use a new key file if you have existing keys. If you don’t do that it’ll overwrite your old ones and you’ll be testing your recovery process. When prompted for a pass phrase, just leave it blank and hit Enter. While a pass phrase would be more secure, I want to use this SSH connection to automatically connect as part of a crontab job. So no one will be able to enter a pass phrase anyway.

So now we have a fresh keypair on Client2, say in a file called id_rsa_2. We need to get the public key to our remote server so it’ll trust us when we connect. We do that with a simple copy and append command,

cat ~/.ssh/id_rsa_2.pub | ssh <your-user>@<your-server> "cat >> ~/.ssh/authorized_keys"

When you run that command you’ll be prompted for your password as normal as we’re in the process of loading up the keys.

Now we have a new key pair and have copied the public key to the remote server so it trusts us when we connect. But if Client2 has multiple key pairs in use (i.e. we had to use id_rsa_2 as otherwise we would have overwritten existing keys), how does SSH on Client2 know which keys to use? By default it’ll always use the first key pair and not our new one.

The simple solution is to create a config file in Client2 called ~/.ssh/config and define a Host and which keys to use.

Host <your-server>
IdentityFile ~/.ssh/id_rsa_2
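If you also want to avoid typing the user name, and stop SSH offering your default key first, a slightly fuller entry might look like this (the User line is an assumption based on your server login):

```
Host <your-server>
    User <your-user>
    IdentityFile ~/.ssh/id_rsa_2
    IdentitiesOnly yes
```

IdentitiesOnly tells SSH to offer only the key named here, which matters once you have several key pairs loaded.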

Now you should be able to SSH from your second machine to your remote server using the new keys, and because it trusts those keys, without having to enter a password.

Code Security Sys Admin

Permissions Problems with git pull

I’ve started working on Doc5 from a laptop in the last few months and have begun the pull/push process to get my Bitbucket repo and desktop machine all in sync. But when trying to get these sorted I found permissions problems on one of the local repos. When I tried to do a pull I had about eight files that either couldn’t be unlinked or couldn’t be created.

If I looked at the permissions on the files, I was the owner, www-data (Apache in Ubuntu) was the group, and the permissions were 644 on the files and 755 on the directories in my project folder. So that all seemed fine.

But what you need to watch for is the extra permissions that a process needs in order to unlink. What git is doing is taking these files away and then replacing them in the folder, i.e. it’s not just a modification through a write action to the file.
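To see why, here’s a small sketch you can run as a normal (non-root) user — the first rm is refused even though the file itself is writable, because the directory isn’t:

```shell
# A quick demonstration that unlinking is a *directory* permission,
# not a file permission (paths here are just a scratch example)
mkdir -p /tmp/perm-demo
touch /tmp/perm-demo/file
chmod 555 /tmp/perm-demo    # directory: read + execute, no write
rm -f /tmp/perm-demo/file 2>/dev/null && echo "removed" || echo "unlink denied"
chmod 755 /tmp/perm-demo    # give write back on the directory
rm -f /tmp/perm-demo/file && echo "removed"
rmdir /tmp/perm-demo
```

The upshot: the user running git pull needs write and execute permission on the directories themselves, not just 644 on the files.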

Sys Admin

Apport Disk Full Error Using apt-get

If you’re a Ubuntu user who finds themselves with an ugly message like this one day when running an apt-get update,

No apport report written because the error message indicates a disk full error

you may have thought you’ve run out of disk space and run the command,

df -h

but then found you had plenty of space free. Well, maybe you do have plenty of bytes free, but what about inodes? They’re effectively a limit on the number of files you can have in a filesystem.
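Checking them is a one-liner: where df -h reports bytes, df -i reports inodes, and an IUse% at or near 100% is the culprit here.

```shell
# Block usage can look fine while inodes are exhausted — check both
df -h /      # bytes free on the root filesystem
df -i /      # inodes free; IUse% near 100% means too many files
```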

Sys Admin

Watching your Network Usage

I’m sure everyone else knows about the iftop tool, but it was new to me. I needed to confirm that traffic from DRBD was using a particular interface, and iftop does the job by showing traffic sources, speed and cumulative data counts per interface.

The iftop tool shows you usage per interface
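If you need to tie it to one interface, running it against that NIC directly is the quickest way. The interface name below is an assumption — yours may be eth1, ens160, etc.:

```shell
# iftop needs root to sniff the interface; -i selects the NIC,
# -t switches to plain text output for terminals without ncurses
sudo iftop -t -i eth0
```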

How-To Sys Admin

Missing Network Interfaces in Ubuntu Under VMware ESXi

Every now and again I clone a VM and add it to another host. ESXi prompts you for a new UID when you start the VM and I always remove the virtual network card(s) from the machine and re-add them later. I do this to make sure I don’t have two machines with the same MAC addresses on the network. But if you do this with Ubuntu, the new NIC(s) don’t get picked up by the OS. This is almost certainly not specific to VMware or their ESXi product, it’s just the environment I’m using.

This problem seems to be caused by a lack of automatic hardware probing at boot, probably for a good reason but I’m no Linux kernel guru so won’t make a judgement there. The root of the issue is located in the file /etc/udev/rules.d/70-persistent-net.rules where you’ll see the old interfaces still listed alongside the new ones. Simply remove the old NIC(s) and ensure the new ones have the MAC addresses you expect and the correct ethx labels. Give the system a reboot and you should be happy.

Steps to resolve a missing network interface in Ubuntu 10.04 Lucid Lynx (and possibly earlier):

  1. sudo nano /etc/udev/rules.d/70-persistent-net.rules
  2. Delete the lines with the old interfaces after comparing with your VM's newly assigned MAC addresses.
  3. Confirm the interface names are what you expect at the end of each line.
  4. Ctrl-X, then Y and Enter, to save and exit.
  5. sudo shutdown -r now
  6. Run ifconfig to confirm the interfaces are up with the correct IPs.
  7. If the interfaces are up, check your /etc/network/interfaces config to adjust IP settings as required.

How-To Sys Admin

Installing VMware Server 2.0.2 in Ubuntu 10.04

After updating my trusty old server to Ubuntu Lucid Lynx 10.04, the installation of VMware Server 2.0.1 started giving problems. Reinstalling VMware didn’t help, as I was repeatedly getting compilation problems in the vmmon and vmnet modules. Luckily I stumbled across the following process from one of the VMware forum pages, which pointed to a great work-around from the radu cotescu site.

So I take no credit for this but simply repeat it here so that the search gods may recognise its usefulness and +1 its importance.

Start by downloading VMware Server 2.0.2 from the official VMware site. If you haven’t already got a few licenses, get one now. (They’re free so you might as well get a few) I’m going to assume the downloaded file is in your home directory.

You also need to update the header files for your current kernel so that the configuration scripts from VMware can build the appropriate modules.

sudo apt-get install linux-headers-`uname -r` build-essential

Now just run the following commands.

cd /usr/local/src
sudo wget []
sudo tar xvzf raducotescu-vmware-server-linux-2.6.3x-kernel-592e882.tar.gz
cd raducotescu-vmware-server-linux-2.6.3x-kernel-592e882/
sudo cp /home/<your_username>/VMware-server-2.0.2-203138.i386.tar.gz .
sudo tar xvzf VMware-server-2.0.2-203138.i386.tar.gz
sudo chmod +x

If you have a previous installation of VMware Server, you’ll be prompted that it’ll be removed as part of the install. Don’t worry, any guest VMs you had should still be there afterwards. The script will run through the usual prompts and you’ll see references to the patched files from Radu Cotescu. After a few minutes you should have a working install of VMware Server 2.0.2 on your Ubuntu 10.04 server.

Blogging Code How-To Sys Admin

Running WordPress & PHP Behind ISA Proxy

Some things work well on their own but when mixed make your life hard. Things like Linux and PHP work very well. Microsoft ISA proxy also does a good job in a corporate MS environment. But making the two work together in a controlled environment can be an exercise in frustration.

In this post I’ll pass on the methods I found to get PHP and your Linux boxes talking out through a corporate ISA proxy server. You can then bring in RSS feeds, updates and other things in WordPress and use apt-get to update Ubuntu.
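As a starting point, the usual approach is the standard proxy environment variables plus an apt-specific setting. The host, port and credentials below are placeholders; note that ISA often insists on NTLM authentication, in which case a local NTLM-aware proxy such as cntlm is the common workaround.

```shell
# Placeholder proxy address and credentials — substitute your own
export http_proxy="http://user:password@proxy.example.local:8080"
export https_proxy="$http_proxy"

# apt reads its own config rather than the environment in some setups
echo 'Acquire::http::Proxy "http://proxy.example.local:8080/";' | \
  sudo tee /etc/apt/apt.conf.d/95proxy
```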