Category Archives: Sys Admin

WordPress Permalink 404 with HTTPS

The time had come to switch this blog to HTTPS, given the ease and cost ($0) of deploying certificates from Let's Encrypt. That was easily done under Apache: create a new conf file for the SSL site in /etc/apache2/sites-available, update the old conf for the non-SSL site to redirect, then request a new cert using certbot-auto -d mike.mcmurray.co.nz --apache. WordPress handled that just fine, but only the admin pages and the main home page displayed as expected; every other page was a 404.
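
For reference, the redirect in the old non-SSL conf can be as simple as the sketch below (server name matches the later example; certbot's --apache plugin can also add an equivalent redirect for you):

<VirtualHost *:80>
    ServerName blog.yoursite.com
    # Send all plain HTTP requests to the HTTPS site
    Redirect permanent / https://blog.yoursite.com/
</VirtualHost>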

So I made the .htaccess file writable by WordPress and re-saved the permalink settings from the WP admin console so the rules would be rewritten. Nope, still the same.
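
For reference, the permalink rules WordPress writes into .htaccess are roughly the standard block:

# BEGIN WordPress
<IfModule mod_rewrite.c>
RewriteEngine On
RewriteBase /
RewriteRule ^index\.php$ - [L]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteRule . /index.php [L]
</IfModule>
# END WordPress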

The rewrite rules are the issue, but only because they're not being allowed to work: the new conf file for the SSL site needs to let .htaccess override the more secure defaults. So the following needs to be in the SSL configuration file (note this shows the relevant sub-section, not the whole file).

<VirtualHost _default_:443>
    ServerAdmin admin@yoursite.com
    ServerName blog.yoursite.com
    ServerAlias blog.yoursite.com
    DocumentRoot /var/www/html/blog

    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined

    <Directory /var/www/html/blog/>
        Options FollowSymLinks
        AllowOverride All
        Order allow,deny
        Allow from all
    </Directory>

    # SSL Engine Switch:
    # Enable/Disable SSL for this virtual host.
    SSLEngine on
    ...

</VirtualHost>
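
One extra thing worth checking, as an aside rather than something I hit here: mod_rewrite has to be enabled and the new site's conf activated before Apache is reloaded (the conf file name below is just a placeholder).

# Enable mod_rewrite, enable the new SSL vhost and reload Apache (Ubuntu/Debian layout)
sudo a2enmod rewrite
sudo a2ensite blog-ssl.conf
sudo systemctl reload apache2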


Allowing SSH Key-Based Logins from Another System

I have a Digital Ocean server that I SSH into from my laptop, mainly for development purposes. But I also want to do scheduled downloads of the server backups from a server at home, which means SSHing to my server from a new machine with no user prompt. Easy, except that it always prompts me for a pass phrase, and I have multiple keys in use on my home server.

While you could just copy your private keys from Client1 to Client2 in order to do this, it’s not a great thing to be doing security-wise. So let’s just not do that.

What you need to do is create a new key pair on Client2 (actually my home server) with,

ssh-keygen

When prompted, make sure you tell it to use a new key file if you already have existing keys; if you don't, it'll overwrite the old ones and you'll be testing your recovery process. When prompted for a pass phrase, just leave it blank and hit Enter. A pass phrase would be more secure, but I want this SSH connection to run automatically as part of a crontab job, so no one will be there to enter one anyway.
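
Something like this should do it non-interactively, assuming id_rsa_2 as the new file name (-N "" gives the empty pass phrase):

# Generate a new key pair in a separate file so existing keys are left untouched
ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa_2 -N ""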

So now we have a fresh keypair on Client2, say in a file called id_rsa_2. We need to get the public key id_rsa_2.pub to our remote server so it’ll trust us when we connect. We do that with a simple copy and append command,

cat ~/.ssh/id_rsa_2.pub | ssh <your-user>@<your-server> "cat >> ~/.ssh/authorized_keys"

When you run that command you'll be prompted for your password as normal, since we're still in the process of loading up the keys.
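
If ssh-copy-id is available on Client2, it does the same append for you and also sorts out the permissions on ~/.ssh and authorized_keys on the server:

# Equivalent to the cat/append above, with permissions handled for you
ssh-copy-id -i ~/.ssh/id_rsa_2.pub <your-user>@<your-server>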

Now we have a new key pair and have copied the public key to the remote server so it trusts us when we connect. But if Client2 has multiple key pairs in use (i.e. we had to use id_rsa_2 so as not to overwrite existing keys), how does SSH on Client2 know which key to offer? By default it will try its standard identity files (like ~/.ssh/id_rsa) and not our new one.

The simple solution is to create a config file on Client2 called ~/.ssh/config and define a Host entry specifying which key to use.

Host <your-server>
    IdentityFile ~/.ssh/id_rsa_2

Now you should be able to SSH from your second machine to the remote server using the new keys and, because key authentication is used, without being asked for a password.
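
As a rough sketch of the end goal, the scheduled backup download on Client2 might then look something like this (the paths are placeholders; the key is picked up automatically via the Host entry in ~/.ssh/config):

# Hypothetical crontab entry: pull the latest backups at 03:30 every day, no prompts
30 3 * * * rsync -az <your-user>@<your-server>:/path/to/backups/ /srv/backups/remote/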

Change of Host = Change of Performance

As per the previous posts, I've moved hosts over the last week and I think everything is now across. While having a quick look in Google Webmaster Tools to check for errors, I also noticed the following little chart showing the time it takes Google to fetch a page from this sub-domain and WordPress site.

[Chart: time taken for Google to fetch a page from this site]

As you can see, the last week shows a very clear decrease in the time to download a page. As far as I can tell the only thing that changed was the provider and, as part of that, the underlying web server. The previous host uses Nginx; now I run what is probably a fairly default LAMP stack. I'm going to assume they can tweak the hell out of that Nginx config they run for all their customers, but it just shows that cheap do-it-yourself servers on SSDs (perhaps the key factor here) can definitely perform.

So after this post I’ll wait for the load to increase and eat my words later. 🙂

Permissions Problems with git pull

I've started working on Doc5 from a laptop in the last few months and have begun the pull/push process to get my Bitbucket repo and desktop machine all in sync. But while getting these sorted I found permissions problems on one of the local repos: when I tried to do a pull, around eight files either couldn't be unlinked or couldn't be created.

Looking at the permissions on the files, I was the owner, www-data (Apache on Ubuntu) was the group, and the permissions were 644 on the files and 755 on the directories in my project folder. So that all seemed fine.

But what you need to watch for is the extra permission a process needs in order to unlink a file: write access on the directory that contains it, not on the file itself. What git is doing is removing these files and then replacing them in the folder, i.e. it's not just a modification through a write to the existing file.
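
A quick throwaway demo of the underlying rule, that unlinking a file needs write permission on the containing directory rather than on the file itself:

mkdir demo && touch demo/file
chmod 555 demo      # directory is now read/execute only; the file itself is still writable
rm -f demo/file     # fails with "Permission denied" even though you own the file
chmod 755 demo      # restore write permission on the directory
rm -f demo/file     # now the unlink succeeds
rmdir demo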

Running Your Own Web Server

After nudging the storage allowance on our web host a few times in the last month, I’ve started setting up the same sites on Digital Ocean. Partly to get the 20GB of SSD storage that resolves the current constraint but also to have a play with managing it all myself and working through the automated provisioning experience.

After a day of configuration and following the odd tutorial (admittedly Digital Ocean have some very good content in their help system on setting up a myriad of different apps and OSes), I can say it really is much easier just going for the basic "pay for simple access to a shared web server" option if you can. Even just WordPress is a bit of a pain in the arse to tweak for the proper security permissions while still allowing uploads and automatic plugin installs. In my case I need the LAMP stack, or at least the file system and web server, accessible, but if you don't, take a minute to think about why you can't just use wordpress.com.

Our old host also had email set up as part of the service, with unlimited mail accounts, etc. Again, with the new option and managing it all myself, I'm not overly looking forward to the fiddling to get Postfix and friends working properly. It'd be great if Google Apps had a much lower price for their per-user mailbox hosting option.

Anyway, so far, so good as this blog is the first of the old content to move across to Digital Ocean.

CrashPlan Fails to Start on Linux Mint

I've just reinstalled Linux after another Windows 8 attempt (this time 8.1) and am trying Linux Mint 17. One of the key apps I install is CrashPlan, and on the 64-bit version of this OS the desktop app component fails to start.

The answer is to append an option to one of the start script lines as below. It took me a few attempts to access the support page for this, so I’m posting it here too.

Source: http://support.code42.com/CrashPlan/Latest/Troubleshooting/CrashPlan_Client_Closes_In_Some_Linux_Installations

  1. Edit the run.conf file in your CrashPlan app installation directory
    Default location: /usr/local/crashplan/bin/
  2. Navigate to the end of the GUI_JAVA_OPTS line
  3. Add this option, inside the quotes:
    -Dorg.eclipse.swt.browser.DefaultType=mozilla
    Example:
SRV_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanService -DappBaseName=CrashPlan -Xms20m -Xmx512m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false"
GUI_JAVA_OPTS="-Dfile.encoding=UTF-8 -Dapp=CrashPlanDesktop -DappBaseName=CrashPlan -Xms20m -Xmx512m -Djava.net.preferIPv4Stack=true -Dsun.net.inetaddr.ttl=300 -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.negative.ttl=0 -Dnetworkaddress.cache.negative.ttl=0 -Dc42.native.md5.enabled=false -Dorg.eclipse.swt.browser.DefaultType=mozilla"
  4. Save the file
    The CrashPlan app should launch properly
Important Note: If you uninstall and reinstall the CrashPlan app, or the app automatically upgrades to a new version, the run.conf file is overwritten, causing this issue to reoccur. Update the run.conf file as described above to correct the issue.
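
Since an upgrade or reinstall wipes the change, a one-liner to reapply it can be handy. This is my own sketch, assuming the default install path and a single, unmodified GUI_JAVA_OPTS line, so check the file afterwards:

# Back up run.conf, then append the SWT browser option inside the GUI_JAVA_OPTS quotes
sudo cp /usr/local/crashplan/bin/run.conf /usr/local/crashplan/bin/run.conf.bak
sudo sed -i 's/^\(GUI_JAVA_OPTS=".*\)"$/\1 -Dorg.eclipse.swt.browser.DefaultType=mozilla"/' /usr/local/crashplan/bin/run.conf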

Microsoft Azure 90 Day Trial

I've just started a 90-day trial of the Microsoft Azure cloud service, as I've got a day session with MS on the topic next week. For those of you thinking about giving it a go, I suggest you jump in now. It's very easy to sign up (no cost, though it does ask for your credit card details) and the management portal is very easy to use. Within five minutes I had a website running, a server being provisioned and a domain namespace configured.

There’s also the suggestion that the websites you add remain free after the trial period, but I’m cynical and thinking that you still need to pay for data transfers and storage at least.

[Screenshot: the Azure management portal and the types of infrastructure you can provision]

You can provision a whole raft of different infrastructure from within Azure, some of which is shown in the screenshot above. There are plenty of Linux images to kick-start your server provisioning, and the websites come with templates for common web apps: blogs, CMSes, etc. While some apps in the latter category use non-MS technologies like MySQL, it seems you can't provision a standalone database other than a SQL instance. Perhaps to be expected.

Once your new virtual machines are up and running you can download an .rdp file to get access to the server and do your normal tasks. But an RDP session from NZ to the Southeast Asia data centre is a bit slow, so I'm thinking either my home connection is a little busy or connectivity out of NZ really is that bad.

DNS and other management roles such as AD and the associated namespaces are easy enough to add too. The configuration for the namespace includes all the identity provider setup that will let your apps and services plug into your own source of user info.

All things considered, after an hour or two of playing the Azure 90-day trial looks to be very worthwhile, even just to have a poke around. If you're a business based around some of these core Microsoft technologies, there's a good chance this could be your "gateway drug" to actually doing this Cloud stuff for real.

Activating Windows 8 64-bit Enterprise from TechNet

For some reason the 64-bit download of Windows 8 Enterprise from TechNet does not prompt for a license key, but it does try to activate, and then fails every time. When it does enter the normal activation process you're likely to see a DNS error or something along the lines of being unable to connect to the remote activation service.

So credit to the forum at Techplex for the simple solution, though it's unclear why it's needed in the first place.

Solution: Right-click in the lower left screen corner and open a command prompt as admin. Type in the following and hit Enter.

slui.exe 3

The Windows Activation process will start, you can enter your TechNet license key and then the app will connect to the remote system and you should then have a properly activated Windows 8 device.

IIS User Authentication

I’ve been designing a new secure Windows domain whose users need access to an IIS website in another domain. The obvious question is, “How can we transparently auth users to that site from both domains?” which IIS looks to make pretty easy – as long as there is a domain trust in place.

Looking around for more info I found an excellent article on WindowsITPro that explains all the various IIS authentication types. So I needed to share its goodness.

Apport Disk Full Error Using apt-get

If you're an Ubuntu user who finds themselves with an ugly message like this one day when running an apt-get update,

No apport report written because the error message indicates a disk full error

you may have thought you’ve run out of disk space and run the command,

df -h

but then found you had plenty of space free. Well, maybe you do have plenty of bytes free, but what about inodes? They effectively limit the number of files you can have in a filesystem.
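
df has a separate flag for inode usage, which is the thing to check here:

# Show inode usage per filesystem; a full IUse% column is the culprit
df -i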