Categories
How-To Web Design

Installing Google Firestore for PHP

Using the NoSQL database Firestore with PHP means installing a few tools to make it work. The alternative is the JavaScript client, but once Firestore access is locked down to a more secure mode, a backend option like PHP may be needed.

The Google docs point to installing the gRPC library for PHP via two options, either PECL or Composer. But as with many technical docs, they seem to miss a step or two, which leaves the reader a little lost and probably frustrated. Hence I’ll step through what I did here with PECL on Ubuntu 18.04 in WSL on Windows 10. (BTW – I thoroughly recommend Windows Subsystem for Linux on Windows 10 for web development; it’s a good middle ground of dev tools and productivity. Grab the new Windows Terminal app for easy terminal access to the WSL instance too.)

First, go and have a read through the docs from Google linked above. It’ll give you the view of what needs doing, and hey, maybe it will work for your environment. Otherwise I’ll describe the steps below that I needed to go through.

NOTE – Where you see a PHP version in a command, make sure you use your current version so you pull in the matching packages.

sudo apt-get update
sudo apt install php7.2-dev autoconf libz-dev php-pear
sudo pecl install grpc

The PECL install step will likely take a few minutes while it goes back to basics and compiles the gRPC module. Lots of information will scroll up the terminal window.

After PECL has completed and successfully created the grpc.so, you’ll need to follow the instructions in the PECL output and update the php.ini config to ensure the module will be loaded.
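
In most cases that boils down to adding a single line to php.ini (the PECL output will confirm the exact wording):

extension=grpc.so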

After you have edited your php.ini file, reload your web server to ensure new processes are loaded. Use of Apache is assumed with the commands below.

sudo nano /etc/php/7.2/apache2/php.ini
sudo service apache2 restart

Now you should have everything compiled and loaded as expected. If you have a test file that’ll dump out your PHP config, load that and search for “grpc”. If you don’t have a file that’ll do that, I’d suggest you create one in a dev environment to help out with the server config. All you need is the following line in a PHP script file in your web server directory.

<?php
phpinfo();
?>

Loading that file in your browser should then show the config section we’re after.
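
As a quick cross-check from the terminal you can also list the loaded modules, though note the CLI reads its own php.ini, so this confirms the command-line config rather than Apache’s:

php -m | grep grpc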

We’re done! Hopefully that worked for you or at least provided some useful information.

Categories
How-To Sys Admin

Backup to AWS S3 Bucket

While this is not an uncommon thing to do, I couldn’t find a straightforward example that covered both databases and file directories. So of course I had to write my own (albeit based on a database-only script from mittsh). For the TL;DR, just go to https://github.com/mikemcmurray/backup-to-s3

It’s written in Python using the ubiquitous boto3 library and reads its settings, source databases and directories from a JSON configuration file. In probably less than five minutes you can have your first backup completed and then just schedule it from then on out.

NOTE: The use of S3 incurs a cost. You are solely responsible for managing the use of that system and any costs incurred.

Installation

Copy the files in this repo or just "git clone" them to the machine that needs backing up. The following will clone the current script into a new folder.

git clone https://github.com/mikemcmurray/backup-to-s3.git backup-to-s3

Change into that new folder and install the libraries listed in the requirements.txt file, i.e. "pip install boto3 argparse --user"

Rename and change the config file to suit your own needs. Run the script manually to make sure your config is working as expected.

If all is good then add it to your crontab to run as often as you like. Each backup file is named with the current timestamp to the second so multiple backups each day can be identified.

Run the backup as below. Full paths are defined in case you’re putting it into crontab, and it’s based on an Ubuntu machine layout. The user home is ubuntu in this example as that’s the default user name on AWS Ubuntu instances.

/usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json
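
For the crontab entry mentioned above, a schedule like this would do the job (the 2am daily timing is just an example, as is the log file path):

0 2 * * * /usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json >> /home/ubuntu/backup-to-s3/backup.log 2>&1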

You can use the AWS S3 key values in the config to split different backups up into S3 keys (like folders) based on your server names or client accounts, etc.

S3 and Glacier

If you have a heap of data in S3 it will start to cost you more than a coffee a day to keep it there. But AWS offers cheaper, longer-term storage in another product called Glacier. The nice thing about these two products is that the bucket properties in S3 can automatically “age out” files from S3 into Glacier. So you only keep the very newest backups in S3 and the rest end up in Glacier, where a few hundred GB only costs you a coffee per month.
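
As a sketch of how that ageing-out can be set up from the AWS CLI (the bucket name and the 30-day threshold are illustrative, and the rule here applies to the whole bucket), save the following as lifecycle.json:

{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}

Then apply it to the bucket:

aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration file://lifecycle.json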

Categories
How-To Sys Admin

MySQL Fails to Update on Ubuntu

So your Ubuntu server doesn’t want to upgrade MySQL using apt-get and fails with the following error?

mysql_upgrade: Got error: 1045: Access denied for user 'debian-sys-maint'@'localhost' (using password: YES) while connecting to the MySQL server
Upgrade process encountered error and will not continue.

Thankfully the fix should be fairly easy to carry out. For some reason the password for the MySQL user debian-sys-maint has got out of sync between the MySQL database and the copy stored in /etc/mysql/debian.cnf.

Get the password that the update process thinks is in use from that file. You’ll need to do this as the root user.

grep 'password' /etc/mysql/debian.cnf

The same password should be echoed twice. Copy and paste the password somewhere safe – like a password manager tool.

Log into MySQL as root from the command line using the normal method below. You will need to use the password for the MySQL root user here when prompted.

mysql -u root -p

Reset the password for the debian-sys-maint user, making sure you substitute in your own password from above.

GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'password-here';

Now if you run the upgrade process again, it should progress and complete any MySQL server upgrades as needed.
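
Depending on where the failed upgrade left things, re-running it is usually just a matter of one of the following:

sudo apt-get upgrade
sudo dpkg --configure -a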

Categories
Blogging How-To Sys Admin

WordPress Permalink 404 with HTTPS

The time had come to switch this blog to HTTPS given the ease and cost ($0) of deploying certificates from LetsEncrypt. That was easily done under Apache: create a new conf file for the SSL site in /etc/apache2/sites-available, then update the old conf for the non-SSL site to redirect before requesting a new cert using certbot-auto -d mike.mcmurray.co.nz --apache. WP handled that just fine, but only the admin pages and the main home page displayed as expected; other pages were just a 404.

So I made the .htaccess file writable by WP and updated the permalink rules from the WP admin console to have the file updated. Nope, still the same.
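
For reference, the standard permalink rules WordPress writes into .htaccess look like the block below (assuming WP is installed at the site root), so you can at least confirm the file really was updated:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress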

The rewrite rules are the issue; it’s just that they’re not being allowed to work. The new conf file for the SSL site needs to let .htaccess override the more secure defaults, so the following needs to be in the SSL configuration file (note this is a sub-section, not the whole thing).

 <VirtualHost _default_:443>
     ServerAdmin admin@yoursite.com
     ServerName blog.yoursite.com
     ServerAlias blog.yoursite.com
     DocumentRoot /var/www/html/blog

     ErrorLog ${APACHE_LOG_DIR}/error.log
     CustomLog ${APACHE_LOG_DIR}/access.log combined

     <Directory /var/www/html/blog/>
         Options FollowSymLinks
         AllowOverride All
         Order allow,deny
         Allow from all
     </Directory>

     # SSL Engine Switch:
     # Enable/Disable SSL for this virtual host.
     SSLEngine on
     ...

</VirtualHost>
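
After saving the change it’s worth checking the syntax and reloading Apache so the new config takes effect:

sudo apache2ctl configtest
sudo service apache2 reload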

Categories
How-To Sys Admin

Allowing SSH Key Based Logins from Another System

I have a Digital Ocean server that I SSH into from my laptop, mainly for development purposes. But I also want to do scheduled downloads of the server backups from a server at home, so I need to SSH from a new machine to my server with no user prompt. Easy enough, except that SSH always prompts me for a pass phrase and I have multiple keys in use on my home server.

While you could just copy your private keys from Client1 to Client2 in order to do this, it’s not a great thing to be doing security-wise. So let’s just not do that.

What you need to do is create a new key pair on Client2 (actually my home server) with,

ssh-keygen

When prompted, make sure you tell it to use a new key file if you have existing keys. If you don’t do that it’ll overwrite your old ones and you’ll be testing your recovery process. When prompted for a pass phrase, just leave it blank and hit Enter. While a pass phrase would be more secure, I want to use this SSH connection to automatically connect as part of a crontab job. So no one will be able to enter a pass phrase anyway.
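
If you’d rather skip the prompts entirely, the same key pair can be generated non-interactively; the file name here is just the one used in the rest of this post:

ssh-keygen -t rsa -f ~/.ssh/id_rsa_2 -N ""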

So now we have a fresh keypair on Client2, say in a file called id_rsa_2. We need to get the public key id_rsa_2.pub to our remote server so it’ll trust us when we connect. We do that with a simple copy and append command,

cat ~/.ssh/id_rsa_2.pub | ssh <your-user>@<your-server> "cat >> ~/.ssh/authorized_keys"

When you run that command you’ll be prompted for your password as normal as we’re in the process of loading up the keys.
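
If it’s available on Client2, ssh-copy-id does the same append for you and is a little harder to get wrong:

ssh-copy-id -i ~/.ssh/id_rsa_2.pub <your-user>@<your-server>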

Now we have a new key pair and have copied the public key to the remote server so it trusts us when we connect. But if Client2 has multiple key pairs in use (i.e. we had to use id_rsa_2 as otherwise we would have overwritten existing keys), how does SSH on Client2 know which keys to use? By default it’ll always use the first key pair and not our new one.

The simple solution is to create a config file in Client2 called ~/.ssh/config and define a Host and which keys to use.

Host <your-server>
    IdentityFile ~/.ssh/id_rsa_2

Now you should be able to SSH from your second machine to your remote server with the new keys and, because the keys are trusted, not have to enter a password.

Categories
How-To Sys Admin

CrashPlan Fails to Start on Linux Mint

I’ve just reinstalled Linux again after another Windows 8 attempt (this time 8.1) and I’m trying Linux Mint 17. One of the key apps I install is CrashPlan, and when installed on the 64-bit version of this OS the desktop app component fails to start.

The answer is to append an option to one of the start script lines as below. It took me a few attempts to access the support page for this, so I’m posting it here too.

Source: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Client_Closes_In_Some_Linux_Installations

  1. Edit the run.conf file in your CrashPlan app installation directory
    Default location: /usr/local/crashplan/bin/
  2. Navigate to the end of the GUI_JAVA_OPTS section
  3. Add this option inside the quotes:
    -Dorg.eclipse.swt.browser.DefaultType=mozilla
    Example:
SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx512m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false"
GUI_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanDesktop -DappBaseName=CrashPlan -Xms20m -Xmx512m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false -Dorg.eclipse.swt.browser.DefaultType=mozilla"
  4. Save the file
    The CrashPlan app should now launch properly
Important Note: If you uninstall and reinstall the CrashPlan app, or the app automatically upgrades to a new version, the run.conf file is overwritten, causing this issue to reoccur. Update the run.conf file as described above to correct the issue.
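
Since the edit needs re-applying after an upgrade, a one-liner along these lines (assuming the default install path and that the option isn’t already present) saves opening the editor each time:

sudo sed -i 's|^GUI_JAVA_OPTS="\(.*\)"|GUI_JAVA_OPTS="\1 -Dorg.eclipse.swt.browser.DefaultType=mozilla"|' /usr/local/crashplan/bin/run.conf
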
Categories
How-To Sys Admin

DRBD: Forcing a full re-sync in a split-brain situation

I have a DRBD setup similar to an old post that’s being used between two Ubuntu servers hosting MySQL. Every few months though the pair goes into a split-brain situation where the secondary can’t see the primary and refuses to reconnect. Users are unaffected as the primary is still working fine, but the HA is lost.

After trying a few different combinations of commands this is what seems to work best for me and cause the quickest recovery. I’m only dealing with a 10GB device so a full sync takes about 10min. If you’re using DRBD for a much larger device, make sure you consider the sync time before doing this.

On the secondary node:
drbdadm secondary all
drbdadm disconnect all (its status goes to Secondary/Unknown)
drbdadm invalidate all
drbdadm connect all

On the functioning primary node:
drbdadm connect all (a full sync now starts)
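
To keep an eye on the resync while it runs, watching the DRBD status works well (on DRBD 8.x the status lives in /proc/drbd):

watch -n 5 cat /proc/drbd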

Remember, it’s your data you’re dealing with so make sure you’re responsible before you run commands like this.

Update – no sign of the root cause of the issue either. After a system update that included the drbd package, things seem to have settled down.

Categories
How-To Sys Admin

Missing Network Interfaces in Ubuntu Under VMware ESXi

Every now and again I clone a VM and add it to another host. ESXi prompts you for a new UID when you start the VM and I always remove the virtual network card(s) from the machine and re-add them later. I do this to make sure I don’t have two machines with the same MAC addresses on the network. But if you do this with Ubuntu, the new NIC(s) don’t get picked up by the OS. This is almost certainly not specific to VMware or their ESXi product, it’s just the environment I’m using.

This problem seems to be caused by a lack of automatic hardware probing at boot, probably for a good reason but I’m no Linux kernel guru so won’t make a judgement there. The root of the issue is located in the file /etc/udev/rules.d/70-persistent-net.rules where you’ll see the old interfaces still listed alongside the new ones. Simply remove the old NIC(s) and ensure the new ones have the MAC addresses you expect and the correct ethx labels. Give the system a reboot and you should be happy.
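
For reference, an entry in that rules file looks something like the line below; the MAC address is illustrative (00:0c:29 is a VMware OUI) and the exact attributes can vary between releases:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:ab:cd:ef", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"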

Steps to resolve a missing network interface in Ubuntu 10.04 Lucid Lynx (and possibly earlier):

  1. sudo nano /etc/udev/rules.d/70-persistent-net.rules
  2. Delete the lines with the old interfaces after comparing them with your VM's newly assigned MAC addresses.
  3. Confirm the interface names are what you expect at the end of each line.
  4. Ctrl-X to save and exit.
  5. sudo shutdown -r now
  6. Run ifconfig to confirm the interfaces are up with the correct IPs.
  7. If the interfaces are up, check your /etc/network/interfaces config to adjust IP settings as required.
Categories
How-To Sys Admin

Installing VMware Server 2.0.2 in Ubuntu 10.04

After updating my trusty old server to Ubuntu Lucid Lynx 10.04, the installation of VMware Server 2.0.1 started giving problems. Reinstalling VMware didn’t help as I was repeatedly getting compilation problems in the vmmon and vmnet modules. Luckily I stumbled across the following process on one of the VMware forum pages, which pointed to a great work-around from the radu cotescu site.

So I take no credit for this but simply repeat it here so that the search gods may recognise its usefulness and +1 its importance.

Start by downloading VMware Server 2.0.2 from the official VMware site. If you haven’t already got a few licenses, get one now (they’re free so you might as well get a few). I’m going to assume the downloaded file is in your home directory.

You also need to update the header files for your current kernel so that the configuration scripts from VMware can build the appropriate modules.

sudo apt-get install linux-headers-`uname -r` build-essential

Now just run the following commands.

cd /usr/local/src
sudo wget http://codebin.cotescu.com/vmware/vmware-server-2.0.x-kernel-2.6.3x-install.sh
sudo tar xvzf raducotescu-vmware-server-linux-2.6.3x-kernel-592e882.tar.gz
cd raducotescu-vmware-server-linux-2.6.3x-kernel-592e882/
sudo cp /home/<your_username>/VMware-server-2.0.2-203138.i386.tar.gz .
sudo tar xvzf VMware-server-2.0.2-203138.i386.tar.gz
sudo chmod +x vmware-server-2.0.x-kernel-2.6.3x-install.sh
./vmware-server-2.0.x-kernel-2.6.3x-install.sh

If you have a previous installation of VMware Server, you’ll be prompted that it’ll be removed as part of the install. Don’t worry, any guest VMs you had should still be there afterwards. The script will run through the usual prompts and you’ll see references to the patched files from Radu Cotescu. After a few minutes you should have a working install of VMware Server 2.0.2 on your Ubuntu 10.04 server.

Categories
Blogging Code How-To Sys Admin

Running WordPress & PHP Behind ISA Proxy

Some things work well on their own but when mixed make your life hard. Things like Linux and PHP work very well. Microsoft ISA proxy also does a good job in a corporate MS environment. But making the two work together in a controlled environment can be an exercise in frustration.

In this post I’ll pass on the methods I found to get PHP and your Linux boxes talking out through a corporate ISA proxy server. You can then bring in RSS feeds, updates and other things in WordPress and use apt-get to update Ubuntu.