Categories
How-To Web Design

Installing Google Firestore for PHP

Using the NoSQL database Firestore with PHP means installing a few tools to make it work. The alternative is the JavaScript client, but once Firestore access is locked down into its more secure mode, a backend option like PHP may be needed.

The Google docs point to installing the gRPC library for PHP via two options: PECL or Composer. But as with many technical docs, they seem to miss a step or two, which leaves the reader a little lost and probably frustrated. Hence I’ll step through what I did here with PECL for Ubuntu 18.04 in WSL on Windows 10. (BTW – I thoroughly recommend Windows Subsystem for Linux in Windows 10 for web development; it’s a good middle ground of dev tools and productivity. Grab the new Windows Terminal app for easy terminal access to the WSL instance too.)

First, go and have a read through the docs from Google linked above. They’ll give you an overview of what needs doing, and hey, maybe it will work for your environment as-is. Otherwise, the steps I needed to go through are described below.

NOTE – Where you see a PHP version in a command, make sure you substitute your current version so you get the matching packages.

sudo apt-get update
sudo apt install php7.2-dev autoconf libz-dev php-pear
sudo pecl install grpc

The PECL install step will likely take a few minutes while it goes back to basics and compiles the gRPC module. Lots of information will scroll up the terminal window.

After PECL has completed and successfully created the grpc.so, you’ll need to follow the instructions in the PECL output and update the php.ini config to ensure the module will be loaded.
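
In my case that meant adding a line like the one below; check the tail of the PECL output for the exact instruction for your setup.

extension=grpc.so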

After you have edited your php.ini file, reload your web server to ensure new processes are loaded. Use of Apache is assumed with the commands below.

sudo nano /etc/php/7.2/apache2/php.ini
sudo service apache2 restart

Now you should have everything compiled and loaded as expected. If you have a test file that’ll dump out your PHP config, load that and search for “grpc”. If you don’t have a file that’ll do that, I’d suggest you create one in a dev environment to help out with the server config. All you need is the following line in a PHP script file in your web server directory.

<?php
phpinfo();
?>

Loading that file in your browser should then show the grpc config section we’re after.
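
With gRPC loaded, the Firestore client library itself is installed with Composer. Below is a minimal sketch of reading a document – the project ID, collection and document names are placeholders, and it assumes your Google Cloud credentials are already set up for the server.

composer require google/cloud-firestore

<?php
require 'vendor/autoload.php';

use Google\Cloud\Firestore\FirestoreClient;

// Placeholder project ID - use your own Firestore-enabled project.
$firestore = new FirestoreClient([
    'projectId' => 'your-project-id',
]);

// Read a single document; collection and document names are examples only.
$snapshot = $firestore->collection('users')->document('alice')->snapshot();

if ($snapshot->exists()) {
    print_r($snapshot->data());
}
?>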

We’re done! Hopefully that worked for you or at least provided some useful information.

Categories
Interesting Stuff Security Sys Admin Web Apps

Chrome 70 vs Symantec Certificates

Chrome 70 is about to distrust a whole lot of certificates

So you paid lots of money for a “proper” certificate for your HTTPS website after Google started giving non-HTTPS sites a hard time? Well, hopefully you aren’t still using an older Symantec-issued certificate, because Google (and others) is about to stop trusting those certificates.

Chrome version 70 is due for release to beta users in September and will NOT trust certificates issued before December 1, 2017 by Symantec, Thawte, GeoTrust and RapidSSL.
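
If you’re not sure what your site is serving, a quick way from a shell to check the issuer and dates on the current certificate is something like this (the hostname is a placeholder for your own):

echo | openssl s_client -connect www.example.com:443 -servername www.example.com 2>/dev/null | openssl x509 -noout -issuer -dates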

This is obviously a big deal, and since the Chrome release lands before a 12-month (or longer) cert will expire, there’s work to do. While you’re revisiting the process of procuring another certificate, perhaps also have a think about why you might not be using the free service from Let’s Encrypt. That’s good enough for most websites unless you’re after one of the fancier-looking icons in the browser for things like shopping carts.

Why is this happening?

The Certificate Authorities (aka CAs, like Symantec) that issue the certificates securing our web browser traffic MUST be absolutely trusted. Without that trust the whole process fails and we might as well just create our own certificates. The reason we don’t do that is that the browser vendors effectively keep a list of those highly trusted CAs, and each site’s cert must chain back mathematically to one of them.

In 2017 a number of issues were raised about how Symantec had been running one of their CAs (they have a few). Inconsistencies and bad practice were highlighted, such that both Mozilla (who keep a list of the issues) and Google decided to change the trust given to certs issued by that CA.

Categories
How-To Sys Admin

Backup to AWS S3 Bucket

While this is not an uncommon thing to do, I couldn’t find a straightforward example covering both databases and file directories. So of course I had to write my own (albeit based on a database-only script from mittsh). For the TL;DR, just go to https://github.com/mikemcmurray/backup-to-s3

It’s written in Python using the ubiquitous boto3 and reads the source databases and directories from a JSON configuration file. In probably less than five minutes you can have completed your first backup and then just schedule it from then on.

NOTE: The use of S3 incurs a cost. You are solely responsible for managing the use of that system and any costs incurred.

Installation

Copy the files in this repo or just “git clone” them to the machine that needs backing up. The following will clone the current script into a new folder.

git clone https://github.com/mikemcmurray/backup-to-s3.git backup-to-s3

Change into that new folder and install the libraries listed in the requirements.txt file, i.e. “pip install boto3 argparse --user”.

Rename and change the config file to suit your own needs. Run the script manually to make sure your config is working as expected.

If all is good then add it to your crontab to run as often as you like. Each backup file is named with the current timestamp to the second so multiple backups each day can be identified.

Run the backup as below. Full paths are used in case you’re putting it into crontab, and the layout is based on an Ubuntu machine. The user home is ubuntu in this example as that’s the default user name on AWS Ubuntu instances.

/usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json
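
If you’re scheduling it, a crontab entry for a nightly run at 2am might look like the following (the schedule is just an example):

0 2 * * * /usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json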

You can use the AWS S3 key values in the config to split different backups up into S3 keys (like folders) based on your server names or client accounts, etc.

S3 and Glacier

If you have a heap of data in S3 it will start to cost you more than a coffee a day to keep it there. But AWS offers cheaper, longer-term storage in another product called Glacier. The nice thing about these two products is that lifecycle rules on the S3 bucket can automatically “age out” objects from S3 into Glacier. You then only keep the very newest backups in S3 and the rest end up in Glacier, where a few hundred GB only costs you a coffee per month.
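
As a sketch of how that ageing-out works, a lifecycle rule like the one below (applied with the AWS CLI; the bucket name and the 30-day threshold are just examples) transitions objects to Glacier automatically:

aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket --lifecycle-configuration '{
  "Rules": [
    {
      "ID": "archive-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [ { "Days": 30, "StorageClass": "GLACIER" } ]
    }
  ]
}'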

Categories
How-To Sys Admin

MySQL Fails to Update on Ubuntu

So your Ubuntu server doesn’t want to upgrade MySQL using apt-get and fails with the following error?

mysql_upgrade: Got error: 1045: Access denied for user 'debian-sys-maint'@'localhost' (using password: YES) while connecting to the MySQL server
Upgrade process encountered error and will not continue.

Thankfully the fix should be fairly easy to carry out. For some reason the password for the MySQL user debian-sys-maint has got out of sync between the MySQL database and the copy stored in /etc/mysql/debian.cnf.

Get the password that the update process thinks is in use from that file. You’ll need to do this as the root user.

grep 'password' /etc/mysql/debian.cnf

The same password should be echoed twice. Copy and paste the password somewhere safe – like a password manager tool.

Log into MySQL as root from the command line using the normal method below. You will need to use the password for the MySQL root user here when prompted.

mysql -u root -p

Reset the password for the debian-sys-maint user, making sure you substitute in your own password from above.

GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'password-here';
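
Note that the GRANT ... IDENTIFIED BY syntax is deprecated in MySQL 5.7 and removed in 8.0, so on a newer server you’d set the password with ALTER USER instead; a rough equivalent:

ALTER USER 'debian-sys-maint'@'localhost' IDENTIFIED BY 'password-here';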

Now if you run the upgrade process again, it should progress and complete any MySQL server upgrades as needed.

Categories
Interesting Stuff

Pressing Pause on Work

The French legislation that was signed off in May 2016 and has been in effect since January 1st 2017 will be studied closely by most other countries over the next few years. Part of the law changes (which included other changes to allow employers to more easily dismiss staff) was to have companies define a time when their staff can effectively disconnect from work email.

Almost all companies have been trying to rapidly adopt a “mobile first” approach to their business, mostly to catch up with their customers, who now use mobiles more than any other device. The flow-on effect has been to try the same with their own workforce, and for good reason: give your staff the right information at the right time in order to better serve your customers and improve their experience.

But email, the bane of many people’s lives, was always the first and simplest product to get people to use. Away from your desk, in a meeting, on the train, and of course at home long after work hours finished. There has been a growing expectation at many companies that emails are almost like TXT messages: something that needs a prompt, if not immediate, response. But email just isn’t that medium, and that expectation is misguided if a company respects and cares about its staff. Some of this is definitely a cultural shift, perhaps with younger employees moving away from email and not having that old mental connection of email to “snail mail” – something that takes time.

In the research done on the subject of stress levels vs email (a topic I’m sure you’re familiar with), it was found that the more you check your email, the higher your stress levels become. If you can’t disconnect and separate your work time from home/play time then your mental health will likely suffer, to the detriment of one or both.

I work a lot with mobile technology, trying to ensure people have the right tools for what they need to do, but I definitely see the advantage of changing expectations around working after hours. I hope the French law changes provide a measurable improvement in the health of those they affect, and that more companies choose to do the same and combine them with similar work environment updates for the “modern age” (whatever that means these days). Work from home if you can, interact with those groups you need to for face-to-face time, but when you’re done for the day, press pause on the work side of your life.

Categories
Blogging How-To Sys Admin

WordPress Permalink 404 with HTTPS

The time had come to switch this blog to HTTPS given the ease and cost ($0) of deploying certificates from LetsEncrypt. That was easily done under Apache: create a new conf file for the SSL site in /etc/apache2/sites-available, then update the old conf for the non-SSL site to redirect before requesting a new cert using certbot-auto -d mike.mcmurray.co.nz --apache. WP handled that just fine, but only the admin pages and the main home page displayed as expected; other pages were just a 404.

So I made the .htaccess file writable by WP and updated the permalink rules from the WP admin console to have the file rewritten. Nope, still the same.
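
For reference, the standard permalink rules WP writes into .htaccess look roughly like this:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress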

The rewrite rules are the key; it’s just that they’re not being allowed to work. The new conf file for the SSL site needs to let .htaccess override the more secure defaults. So this needs to be in the SSL configuration file – note this is a sub-section, not the whole thing.

 <VirtualHost _default_:443>
     ServerAdmin admin@yoursite.com
     ServerName blog.yoursite.com
     ServerAlias blog.yoursite.com
     DocumentRoot /var/www/html/blog

     ErrorLog ${APACHE_LOG_DIR}/error.log
     CustomLog ${APACHE_LOG_DIR}/access.log combined

     <Directory /var/www/html/blog/>
         Options FollowSymLinks
         AllowOverride All
         Order allow,deny
         Allow from all
     </Directory>

     # SSL Engine Switch:
     # Enable/Disable SSL for this virtual host.
     SSLEngine on
     ...

</VirtualHost>
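
One other note: the Order/Allow directives above are the older Apache 2.2 syntax; on Apache 2.4 the modern equivalent is Require all granted (the 2.2 form only works there if the compatibility module is loaded). Either way, make sure mod_rewrite is enabled and reload Apache after the change.

sudo a2enmod rewrite
sudo service apache2 restart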

Categories
How-To Sys Admin

Allowing SSH Key Based Logins from Another System

I have a Digital Ocean server that I SSH into from my laptop, mainly for development purposes. But I also want to do scheduled downloads of the server backups from a server at home, so I need to SSH from a new machine to my server with no user prompt. Easy, except that it always prompts me for a passphrase, and I have multiple keys in use on my home server.

While you could just copy your private keys from Client1 to Client2 in order to do this, it’s not a great thing to be doing security-wise. So let’s just not do that.

What you need to do is create a new key pair on Client2 (actually my home server) with,

ssh-keygen

When prompted, make sure you tell it to use a new key file if you have existing keys. If you don’t, it’ll overwrite your old ones and you’ll be testing your recovery process. When prompted for a passphrase, just leave it blank and hit Enter. While a passphrase would be more secure, I want this SSH connection to run automatically as part of a crontab job, so no one will be around to enter a passphrase anyway.
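
You can also skip the prompts entirely by telling ssh-keygen the output file and an empty passphrase up front. A sketch (the key type, size and file name are just examples):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_2 -N ""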

So now we have a fresh key pair on Client2, say in a file called id_rsa_2. We need to get the public key id_rsa_2.pub onto the remote server so it’ll trust us when we connect. We do that with a simple copy-and-append command,

cat ~/.ssh/id_rsa_2.pub | ssh <your-user>@<your-server> "cat >> ~/.ssh/authorized_keys"

When you run that command you’ll be prompted for your password as normal, since we’re still in the process of loading up the keys.

Now we have a new key pair and have copied the public key to the remote server so it trusts us when we connect. But if Client2 has multiple key pairs in use (remember, we had to use id_rsa_2 as otherwise we would have overwritten existing keys), how does SSH on Client2 know which key to use? By default it’ll always offer the default key pair and not our new one.

The simple solution is to create a config file on Client2 called ~/.ssh/config and define a Host and which key to use.

Host <your-server>
    IdentityFile ~/.ssh/id_rsa_2

Now you should be able to SSH from your second machine to your remote server with the new keys and, because you’re using keys, not have to enter a password.
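
A quick way to confirm it’s all working is to run a one-off command over SSH from Client2; it should complete without asking for a password or passphrase.

ssh <your-user>@<your-server> "echo connected OK"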

Categories
Interesting Stuff

Geo-blocked Content and Business Models

The internet has changed the world we live in dramatically in the last 10 years. This is a fact that no one would dispute. But many businesses are continuing to ignore some of the associated changes that this global connectivity has brought. No longer do the borders of countries matter to data; those of us with connectivity can share anything we like.

A business that started on the internet should know what this new world looks like, and so should the older content businesses – they’ve had their chance to evolve. Newspapers are very different in many countries now: no longer are they part of the morning ritual, and no longer do advertisers queue at their door ready to put up with what was typically a poor experience (ever tried to place a classified ad?).

TV broadcasters are now where newspapers were five or more years ago, and most are acting to embrace, rather than fight, the new technology. “On demand” web sites from broadcasters in NZ now often show new content before it is delivered over the air to TV sets. They realise that people can and will get the same content from other sources if they don’t, and that people want to watch on their own schedules.

The power has shifted away from the broadcasters to the content owners. If people are happy to stream content when they want, they often care little about who is providing it. Why are we tied to a broadcaster who simply takes the video, inserts their own ads and then pushes play? As broadcasters face this issue they stick to their business model and protect it by forcing their consumers to jump through ever smaller and more restrictive hoops. Want to view this video or listen to this song? Sorry, not in your country.

Because of the internet, the technology to work around these restrictions is fairly easy for many people to employ. VPNs and DNS configurations offer ways to subvert the geo-blocking restrictions, and are being “consumerised” as apps that Mum and Dad can download and use. Technical changes and smart people will work around what the other tech and smart people create, until we get to where we are now: legal threats.

Digital property needs to be recognised as being different from physical property. Theft does not harm the owner in the same way that stealing money or your car does. Yes, consumers should recognise someone’s work and effort and reward them, but consumers also shouldn’t be punished with huge fines due to the loss of a $5 movie rental.

We can’t undo the internet; it’s here to stay, and we have to work out a way for quality products to fit into this new world. The new broadcasters (Netflix, Neo, Lightbox, etc.) should accept that they will all have very similar content and that they need to provide the service on top of that to keep customers – not threaten them and split them up by location.

If we can’t work it out, then we might look back on this period and think the internet put a severe dent in human culture because everyone was chasing the money.

Categories
Code Web Apps

Doc5 Wiki Available for Download

Slightly behind with this post, but I finally have a new release of Doc5 available for download.

New features include,

  • Full WYSIWYG editing and no more trying to get used to the markup. (Not that it was difficult, but people are used to rich editors these days.)
  • Complete redesign of the UI.
    Bootstrap makes for an easy to use, clean interface and I really like the design anyway.
  • Easier to use, more finely grained permissions.
    Per user permissions for categories and pages and inheritance for pages.
  • Much better file management and easier to link files into pages.
  • Bug fixes and support for different databases with faster access.
  • HTML email templates.
    This will make it easier to extend and handle language translations in the future.

Categories
Sys Admin

Change of Host = Change of Performance

As per the previous posts, I’ve moved hosts over the last week and now I think everything is across. While having a quick look at the Google Web Developer Tools to check for errors, I also saw the following little chart showing the time it takes Google to fetch a page from this sub-domain and WordPress site.

[Chart: google-site-perf – Google’s reported page download time for this site over recent weeks]

As you can see, the last week shows a very clear decrease in the time to download a page. As far as I can tell the only thing to change has been the provider, and as part of that the underlying web server. The previous host uses Nginx and I now run what is probably a fairly default LAMP stack. I’m going to assume they can tweak the hell out of the Nginx config they run for all their customers, but it just shows that a cheap do-it-yourself server on SSD (perhaps the key here) can definitely perform.

So after this post I’ll wait for the load to increase and eat my words later. 🙂