Categories
Interesting Stuff Security Sys Admin Web Apps

Chrome 70 vs Symantec Certificates

Chrome 70 is about to distrust a whole lot of certificates

So you paid lots of money for a “proper” certificate for your HTTPS website after Google gave non-HTTPS sites a hard time? Well, hopefully you aren’t still using an older Symantec-issued certificate, because Google (and others) are about to stop trusting those certificates.

Chrome version 70 is due for beta release in September and will NOT trust certificates issued before December 1, 2017 by Symantec and its brands Thawte, GeoTrust and RapidSSL.
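If you want to check whether a certificate you rely on is affected, openssl will show you the issuer and issue date. A quick sketch, with example.com standing in for your own host:

# Print the issuer and validity dates of the certificate a site presents.
echo | openssl s_client -connect example.com:443 -servername example.com 2>/dev/null \
  | openssl x509 -noout -issuer -dates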

This is obviously a big deal, and because the Chrome release lands before a typical 12-month (or longer) certificate expires, there’s work to do. While you’re revisiting the process of procuring another certificate, perhaps also have a think about why you might not be using the free service from Let’s Encrypt. That’s good enough for most websites unless you’re after one of the fancier browser indicators (the green EV bar) for things like shopping carts.
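If you do go the Let’s Encrypt route, getting a certificate takes only a few commands. A minimal sketch, assuming Apache on Ubuntu and the certbot-auto client of the era (substitute your own domain):

# Fetch the certbot-auto client, make it executable, then request
# and install a certificate for an Apache-served domain.
wget https://dl.eff.org/certbot-auto
chmod a+x certbot-auto
sudo ./certbot-auto --apache -d www.example.com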

Why is this happening?

The Certificate Authorities (CAs, like Symantec) that issue the certificates securing our web browser traffic MUST be absolutely trusted. Without that trust the process fails and we might as well just create our own self-signed certificates. The reason we don’t do that is that the browser vendors each maintain a list of highly trusted CAs, and every site’s cert must chain cryptographically to one of those.

In 2017 a number of issues were raised about how Symantec had been running one of their CAs (they have a few). Inconsistencies and bad practice were highlighted, such that both Mozilla (who maintain a list of the issues) and Google decided to change the trust of certs issued by that CA.

Categories
How-To Sys Admin

Backup to AWS S3 Bucket

While this is not an uncommon thing to do, I couldn’t find a straightforward example covering both databases and file directories. So of course I had to write my own (albeit based on a database-only script from mittsh). For the TL;DR, just go to https://github.com/mikemcmurray/backup-to-s3

It’s written in Python using the ubiquitous boto3 library and reads the source databases and directories from a JSON configuration file. In probably less than five minutes you can have completed your first backup, and from then on you just schedule it.

NOTE: The use of S3 incurs a cost. You are solely responsible for managing the use of that system and any costs incurred.

Installation

Copy the files in this repo or just “git clone” them to the machine that needs backing up. The following will clone the current script into a new folder.

git clone https://github.com/mikemcmurray/backup-to-s3.git backup-to-s3

Change into that new folder and install the libraries listed in the requirements.txt file, i.e. "pip install --user boto3 argparse".

Rename and change the config file to suit your own needs. Run the script manually to make sure your config is working as expected.

If all is good then add it to your crontab to run as often as you like. Each backup file is named with the current timestamp to the second so multiple backups each day can be identified.

Run the backup as below; a sample crontab entry follows the command. Full paths are used in case you’re putting it into crontab, and are based on an Ubuntu machine layout. The user home is /home/ubuntu in this example, as that’s the default user name on AWS Ubuntu instances.

/usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json
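For example, a crontab entry to run that same command nightly at 2am (with output appended to a log file) might look like this:

# m h dom mon dow command
0 2 * * * /usr/bin/python /home/ubuntu/backup-to-s3/backup-to-s3.py /home/ubuntu/backup-to-s3/backup-to-s3.json >> /home/ubuntu/backup-to-s3/backup.log 2>&1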

You can use the AWS S3 key values in the config to split different backups up into S3 keys (like folders) based on your server names or client accounts, etc.

S3 and Glacier

If you have a heap of data in S3 it will start to cost you more than a coffee a day to keep it there. But AWS offers cheaper, longer-term storage in another product called Glacier. The nice thing about the pairing is that bucket lifecycle rules in S3 can automatically “age out” files from S3 into Glacier. So you keep only the very newest backups in S3 and the rest end up in Glacier, where a few hundred GB costs about a coffee per month.
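You can set that lifecycle rule up in the S3 console, or with the AWS CLI. A sketch, where the bucket name and the 30-day window are assumptions to adjust:

# Move objects to Glacier 30 days after they are created.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "age-out-to-glacier",
      "Status": "Enabled",
      "Filter": {"Prefix": ""},
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}]
    }]
  }'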

Categories
How-To Sys Admin

MySQL Fails to Update on Ubuntu

So your Ubuntu server doesn’t want to upgrade MySQL using apt-get and fails with the following error?

mysql_upgrade: Got error: 1045: Access denied for user 'debian-sys-maint'@'localhost' (using password: YES) while connecting to the MySQL server
Upgrade process encountered error and will not continue.

Thankfully the fix should be fairly easy to carry out. For some reason the password for the MySQL user debian-sys-maint has got out of sync between the MySQL database and the copy stored in /etc/mysql/debian.cnf.

Get the password that the update process thinks is in use from that file. You’ll need to do this as the root user.

grep 'password' /etc/mysql/debian.cnf

The same password should be echoed twice. Copy and paste the password somewhere safe, like a password manager tool.

Log into MySQL as root from the command line using the normal method below. You will need to use the password for the MySQL root user here when prompted.

mysql -u root -p

Reset the password for the debian-sys-maint user, making sure you substitute in your own password from above.

GRANT ALL PRIVILEGES ON *.* TO 'debian-sys-maint'@'localhost' IDENTIFIED BY 'password-here';

Now if you run the upgrade process again, it should progress and complete any MySQL server upgrades as needed.
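A quick way to confirm the passwords are back in sync (assuming the standard Debian/Ubuntu layout) is to connect using the exact credentials the upgrade process uses:

# Connect as debian-sys-maint with the credentials from debian.cnf.
sudo mysql --defaults-file=/etc/mysql/debian.cnf -e "SELECT 1;"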

Categories
Blogging How-To Sys Admin

WordPress Permalink 404 with HTTPS

The time had come to switch this blog to HTTPS given the ease and cost ($0) of deploying certificates from Let’s Encrypt. That was easily done under Apache: create a new conf file for the SSL site in /etc/apache2/sites-available, update the old conf for the non-SSL site to redirect, then request a new cert using "certbot-auto --apache -d mike.mcmurray.co.nz". WP handled that just fine, but only the admin pages and the main home page displayed as expected; every other page was just a 404.

So I made the .htaccess file writable by WP and updated the permalink rules from the WP admin console so that WP would rewrite the file. Nope, still the same.
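For reference, the standard rewrite block that WP writes into .htaccess looks like this:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress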

The rewrite rules are indeed the issue, but only in that they’re not being allowed to work: the new conf file for the SSL site needs to let the web server override the more secure defaults. So this needs to be in the SSL configuration file; note this is a sub-section, not the whole thing.

<VirtualHost _default_:443>
    ServerAdmin admin@yoursite.com
    ServerName blog.yoursite.com
    ServerAlias blog.yoursite.com
    DocumentRoot /var/www/html/blog

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    # AllowOverride All is the key line: it lets the .htaccess
    # rewrite rules that WP maintains actually take effect.
    <Directory /var/www/html/blog/>
        Options FollowSymLinks
        AllowOverride All
        # Apache 2.2 syntax; on Apache 2.4 use "Require all granted" instead.
        Order allow,deny
        Allow from all
    </Directory>

    # SSL Engine Switch:
    # Enable/Disable SSL for this virtual host.
    SSLEngine on
    ...

</VirtualHost>
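After changing the conf, make sure mod_rewrite is actually enabled, check the syntax, and reload Apache. Standard Ubuntu commands:

sudo a2enmod rewrite
sudo apache2ctl configtest
sudo service apache2 reload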


Categories
How-To Sys Admin

Allowing SSH Key Based Logins from Another System

I have a Digital Ocean server that I SSH into from my laptop, mainly for development purposes. But I also want to do scheduled downloads of the server backups from a server at home, so I need to SSH from a new machine to my server with no user prompt. Easy, except that SSH always prompts me for a passphrase and I have multiple keys in use on my home server.

While you could just copy your private keys from Client1 to Client2 in order to do this, it’s not a great thing to be doing security-wise. So let’s just not do that.

What you need to do is create a new key pair on Client2 (actually my home server) with,

ssh-keygen

When prompted, make sure you tell it to use a new key file if you have existing keys; if you don’t, it’ll overwrite your old ones and you’ll be testing your recovery process. When prompted for a passphrase, just leave it blank and hit Enter. While a passphrase would be more secure, I want this SSH connection to run automatically as part of a crontab job, so no one will be there to enter a passphrase anyway.
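As an aside, you can do all of that in one non-interactive command by supplying the new file name and an empty passphrase up front:

# Create a 4096-bit RSA key pair in a separate file with no passphrase.
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_2 -N ""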

So now we have a fresh keypair on Client2, say in a file called id_rsa_2. We need to get the public key id_rsa_2.pub to our remote server so it’ll trust us when we connect. We do that with a simple copy and append command,

cat ~/.ssh/id_rsa_2.pub | ssh <your-user>@<your-server> "cat >> ~/.ssh/authorized_keys"

When you run that command you’ll be prompted for your password as normal, since we’re still in the process of loading up the keys.
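Alternatively, if your system has ssh-copy-id, it does the same append (and sets sane permissions on the remote side) for you:

ssh-copy-id -i ~/.ssh/id_rsa_2.pub <your-user>@<your-server>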

Now we have a new key pair and have copied the public key to the remote server so it trusts us when we connect. But if Client2 has multiple key pairs in use (i.e. we had to use id_rsa_2, as otherwise we would have overwritten existing keys), how does SSH on Client2 know which key to use? By default it’ll offer the default key pair, not our new one.

The simple solution is to create a config file on Client2 called ~/.ssh/config and define a Host and which key to use.

Host <your-server>
    IdentityFile ~/.ssh/id_rsa_2

Now you should be able to SSH from your second machine to your remote server using the new keys and, because they have no passphrase, without being prompted for anything.
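And with that in place, the original goal of scheduled, prompt-free backup downloads becomes a one-line cron job. A sketch only; the remote and local paths here are placeholders:

# Pull the server's backup directory down every night at 3am.
# rsync runs over SSH, so it picks up the IdentityFile from ~/.ssh/config.
0 3 * * * rsync -az <your-user>@<your-server>:/path/to/backups/ /home/<your-user>/server-backups/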

Categories
Sys Admin

Change of Host = Change of Performance

As per the previous posts I’ve moved hosts over the last week, and now I think everything is across. While having a quick look at Google Webmaster Tools to check for errors, I also saw the following little chart showing the time it takes Google to fetch a page from this sub-domain and WordPress site.

(Chart: time spent downloading a page, as reported by Google Webmaster Tools.)

As you can see, the last week shows a very clear decrease in the time to download a page. As far as I can tell the only thing to change has been the provider, and with it the underlying web server. The previous host uses Nginx, and now I run what is probably a fairly default LAMP stack. I’m going to assume they can tweak the hell out of that Nginx config they run for all their customers, but it just shows that cheap, do-it-yourself servers on SSDs (perhaps the key factor here) can definitely perform.

So after this post I’ll wait for the load to increase and eat my words later. 🙂

Categories
Code Security Sys Admin

Permissions Problems with git pull

I’ve started working on Doc5 from a laptop in the last few months and have begun the pull/push process to keep my Bitbucket repo and desktop machine in sync. But while getting these sorted I found permissions problems on one of the local repos: when I tried to do a pull, about eight files either couldn’t be unlinked or couldn’t be created.

If I looked at the permissions on the files, I was the owner, www-data (Apache on Ubuntu) was the group, and the permissions were 644 on the files and 755 on the directories in my project folder. So that all seemed fine.

But what you need to watch for is the extra permission a process needs in order to unlink: write access on the containing directory, not just on the file. During a pull, git removes those files and then recreates them in the folder; it’s not just modifying them in place through a write to the file.
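A sketch of how to check for and fix that, assuming (as in my case) the files should belong to you with www-data as the group; adjust to your own security requirements:

# Unlinking a file needs write permission on its directory, not the file,
# so list any directories in the working tree you can't write to.
find . -type d ! -writable

# One possible fix: take ownership back, keeping www-data as the group,
# and make sure the directories are writable by their owner.
sudo chown -R "$USER":www-data .
find . -type d -exec chmod 755 {} +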

Categories
Blogging Sys Admin

Running Your Own Web Server

After nudging up against the storage allowance on our web host a few times in the last month, I’ve started setting up the same sites on Digital Ocean. Partly to get the 20GB of SSD storage that resolves the current constraint, but also to have a play with managing it all myself and working through the automated provisioning experience.

After a day of configuration and following the odd tutorial (admittedly Digital Ocean has some very good content in its help system on setting up a myriad of different apps and OSes), I can say it really is much easier just going for the basic “pay for simple access to a shared web server” option if you can. Even just WordPress is a bit of a pain in the arse to tweak for the proper security permissions while still allowing uploads and automatic plugin installs; a starting point is sketched below. In my case I need the LAMP stack, or at least the file system and web server, accessible, but if you don’t, take a minute to think about why you can’t just use wordpress.com.
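For what it’s worth, a common starting point for those WordPress permissions, sketched here under the assumption of a standard Ubuntu LAMP layout in /var/www/html:

# Your user owns the files; www-data (Apache) is the group.
sudo chown -R "$USER":www-data /var/www/html
sudo find /var/www/html -type d -exec chmod 755 {} +
sudo find /var/www/html -type f -exec chmod 644 {} +

# Let WordPress itself write uploads, themes and plugins.
sudo chmod -R g+w /var/www/html/wp-content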

Our old host also had email set up as part of the service, with unlimited mail accounts, etc. Again, with the new option and managing it all myself, I’m not overly looking forward to the fiddling to get Postfix and friends working properly. It’d be great if Google Apps had a much lower price for its per-user mailbox hosting option.

Anyway, so far, so good as this blog is the first of the old content to move across to Digital Ocean.

Categories
How-To Sys Admin

CrashPlan Fails to Start on Linux Mint

I’ve just reinstalled Linux again after another Windows 8 attempt (this time 8.1) and I’m trying Linux Mint 17. One of the key apps I install is CrashPlan, and on the 64-bit version of this OS the desktop app component fails to start.

The answer is to append an option to one of the start script lines as below. It took me a few attempts to access the support page for this, so I’m posting it here too.

Source: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Client_Closes_In_Some_Linux_Installations

1. Edit the run.conf file in your CrashPlan app installation directory (default location: /usr/local/crashplan/bin/).
2. Navigate to the end of the GUI_JAVA_OPTS section.
3. Add this option, inside the quotes: -Dorg.eclipse.swt.browser.DefaultType=mozilla

Example:

SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx512m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false"
GUI_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanDesktop -DappBaseName=CrashPlan -Xms20m -Xmx512m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false -Dorg.eclipse.swt.browser.DefaultType=mozilla"

4. Save the file. The CrashPlan app should now launch properly.

Important note: if you uninstall and reinstall the CrashPlan app, or the app automatically upgrades to a new version, the run.conf file is overwritten and this issue will reoccur. Update run.conf as described above to correct it again.
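Since upgrades overwrite run.conf, it might be worth keeping a one-liner handy to re-apply the fix. A sketch, assuming the default install path (run it once only; it appends blindly each time):

# Append the SWT browser option inside the GUI_JAVA_OPTS quotes.
sudo sed -i 's/^\(GUI_JAVA_OPTS=".*\)"$/\1 -Dorg.eclipse.swt.browser.DefaultType=mozilla"/' /usr/local/crashplan/bin/run.conf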
Categories
Infrastructure Design Sys Admin

Microsoft Azure 90 Day Trial

I’ve just started a 90-day trial of the Microsoft Azure cloud service, as I’ve got a day session next week with MS on the topic. For those of you thinking about giving it a go, I suggest you jump in now. It’s very easy to sign up (no cost, but it does ask for your credit card details) and the management portal is very easy to use. Within five minutes I had a website running, a server being provisioned and a domain namespace configured.

There’s also the suggestion that the websites you add remain free after the trial period, but I’m cynical and think you’ll still need to pay for data transfers and storage at least.

(Screenshot: provisioning options in the Azure management portal.)

You can provision a whole raft of different infrastructure from within Azure, some of which is shown in the screenshot above. There are plenty of Linux images to kick-start your server provisioning, and the websites come with templates for common web apps: blogs, CMSes, etc. While some apps in the latter category use non-MS technologies like MySQL, it seems you can’t provision a standalone database other than a SQL Server instance. Perhaps that’s to be expected.

Once your new virtual machines are up and running you can download an .rdp file to get access to the server and do your normal tasks. But an RDP session from NZ to the Southeast Asia data centre is a bit slow, so I’m thinking either my home connection is a little busy or connectivity really is that bad out of NZ.

DNS and other management roles such as AD and the associated namespaces are easy enough to add too. The configuration for the namespace includes all the identity provider set up that will also allow your apps and services to plug into your own source of user info.

All things considered, after an hour or two of playing, the Azure 90-day trial looks to be very worthwhile, even just for a play. If you’re a business based around some of these core Microsoft technologies, there’s a good chance this may be your “gateway drug” to actually stepping into doing this Cloud stuff for real.