Moving the second (intermission)

I had successfully proved that the concept (migrating a PHP front-ended, MySQL back-ended website from a commercial webhost based in Arizona to my NAS here in the UK) was sound.

And I had documented the steps and processes needed to make it all happen.

I had signed off with a light-hearted statement about learning to migrate the associated mail accounts after a cup of tea.

Yeah, well I’m struggling with the mail thing.

But while I struggle, here’s a thing.

I am hyperanal about security, and have a number of default characteristics set up, on my NAS, including automatic IP blacklisting after x successful attempts to log on (where x is a number I’m not disclosing), and instant SMS alerts of various events to my phone.

So, a few days ago I enabled my NAS’s mailserver and began configuring it.

Within 24 hours of enabling the mailserver, I started getting attempted-penetration alerts for it.

My alerts look like this:

  1. The IP address [] experienced x failed attempts when attempting to log into Mail Server running, and was blocked at Sun Sep 22 08:04:34 2013
  2. The IP address [] experienced x failed attempts when attempting to log into Mail Server running, and was blocked at Sun Sep 22 09:04:34 2013
  3. The IP address [] experienced x failed attempts when attempting to log into Mail Server running, and was blocked at Sun Sep 22 10:04:34 2013
  4. The IP address [] experienced x failed attempts when attempting to log into Mail Server running, and was blocked at Sun Sep 22 11:04:34 2013
  • The 1st IP is registered in Brazil
  • The 2nd IP is registered in Brazil
  • The 3rd IP is registered in Malaysia
  • The 4th IP is registered in India

My question is: given that these penetration attempts have targeted the mailserver (not the NAS root), how the flipping flip did they identify that I had begun to configure a mailserver?

I hadn’t published any MX record

I hadn’t registered the mailserver anywhere on the web

I hadn’t even completed the mailserver config

I am, frankly, puzzled as to how these bots (I’m assuming they are robots, not real people) have latched on to what I was doing.

I can guess what they’re after. I am assuming the bots are trying to establish a backdoor on my mailserver, from which they can spam the world in the name of any accounts that might have been set up there.

But how did they know?
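
For what it’s worth, the most likely answer is that they didn’t ‘know’ anything: spam botnets blindly sweep whole ranges of IP addresses probing the standard SMTP ports (25, 465, 587), so a freshly opened mail port gets found within hours, MX record or not. A minimal sketch of the kind of single probe such bots automate (the host and port below are placeholders):

```shell
# Minimal sketch of a blind TCP port probe, of the sort botnets run
# across entire IP ranges. Host and port are placeholders.
check_port() {
  # prints "open" if a TCP connection to $1:$2 succeeds, else "closed"
  if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$2" 2>/dev/null; then
    echo open
  else
    echo closed
  fi
}

# e.g. check_port 192.0.2.10 25
```

Run across millions of addresses, a loop like that finds every listening mailserver on the internet, configured or not.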

Moving the second (part IV)

How to migrate a website from a commercial host to your Synology Diskstation with no loss of uptime:

Before you start, you need:

  • a static IP address from your ISP
  • port forwarding configured on your router

You will also need:

  • a locally-saved backup of your current live database
  • a locally-saved backup of the current live content/files


Steps 1 – 4 are all Synology Diskstation tasks:

1. Control panel -> Web services -> Virtual host ->

  • subfolder name: (enter the website without the TLD suffix) fredbloggs
  • folder name: (enter the full address of the website)
  • click OK


2. Installed packages control panel -> DNS Server -> Zones -> Create Master Zone ->

  • Domain type: (select) Forward Zone
  • Domain name: (enter the full address of the website)
  • Master DNS server: (enter the static IP address)
  • click OK

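Behind the scenes, the master zone created in step 2 boils down to a handful of records. A sketch of the minimum it needs (example.com and 203.0.113.10 are placeholders for your domain and static IP):

```
example.com.      IN  SOA  ns.example.com. admin.example.com. (
                           2013092201 ; serial
                           3600       ; refresh
                           600        ; retry
                           604800     ; expire
                           86400 )    ; minimum TTL
example.com.      IN  NS   ns.example.com.
ns.example.com.   IN  A    203.0.113.10
example.com.      IN  A    203.0.113.10
www.example.com.  IN  A    203.0.113.10
```

The DNS Server package generates these for you, but it is worth knowing what a healthy zone looks like when troubleshooting.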

3. Installed package control panel -> phpMyAdmin -> Databases ->

  • Create database fredbloggs / utf8_general_ci
  • go to database fredbloggs
  • Privileges -> Add a new User ->
  • User name: fredbloggs
  • host: localhost
  • password: [whatever]
  • retype: [whatever]
  • Database for user: Grant all privileges on database “fredbloggs”
  • Global privileges: leave all unchecked
  • Resource limits: leave as default
  • Click Add User
  • Click Import
  • Click Choose file
  • Navigate to your locally-saved backup MySQL database
  • Click Open (the database will import)

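If phpMyAdmin misbehaves (see part III), the same database, user and grant can be created from the MySQL command line instead. A sketch, using the placeholder names from the steps above:

```sql
-- Same as step 3, from the MySQL prompt (placeholder names/password):
CREATE DATABASE fredbloggs CHARACTER SET utf8 COLLATE utf8_general_ci;
CREATE USER 'fredbloggs'@'localhost' IDENTIFIED BY 'whatever';
GRANT ALL PRIVILEGES ON fredbloggs.* TO 'fredbloggs'@'localhost';
FLUSH PRIVILEGES;
```

The locally-saved backup can then be imported from the shell with: mysql -u fredbloggs -p fredbloggs < backup.sql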

4. Filestation -> navigate to your locally-saved copy of content/files

  • Copy all backed up, locally-saved content/files to the webfolder ‘fredbloggs’ in the Web directory of your NAS
  • (nb: you may need to change the DB Hostname in your config.php file, to point to ‘localhost’)

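If the site is WordPress, for example, the lines in question live in wp-config.php, and end up looking like this after the move (placeholder names again):

```php
// Database settings after the move: the database now lives on the
// NAS itself, so the host becomes 'localhost' (placeholder values).
define('DB_NAME',     'fredbloggs');
define('DB_USER',     'fredbloggs');
define('DB_PASSWORD', 'whatever');
define('DB_HOST',     'localhost');
```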

5. Log in to your Domain Registrar control panel -> Edit the zone file so the @ record points to your static IP address

6. Drink tea (it could take a few hours for the new server address to propagate around the internet, but your website will not drop while this is happening. nb: do not enter new content on the website until the change has propagated; you can spot-check propagation with ‘dig yourdomain.com @8.8.8.8’)


Email accounts associated with that domain name are a kettle of different fish that I haven’t yet got my head around.

I’ll do that after more tea.

Moving the second (part III)


In the best traditions of technical geekery, the next part in the exercise to self-host on a Synology Diskstation NAS was not as straightforward as it could have been.

But I got there.

There are a few very minor gremlins to sort out, but tonight was all about establishing a proof of concept, and implementing a prototype self-hosted, php-based/MySQL-backed website.

I began by taking a copy of WordPress from my downloads folder. I knew it was not the current version, but that was part of my plan – I wanted to see how the self-upgrade function would work on a self-hosting NAS.

I exploded the zipped WordPress files locally, and copied the exploded WP structure and files over to the appropriate web-folder on the NAS.

Then I fired up the phpMyAdmin console and attempted to create a new MySQL database.


phpMyAdmin could navigate around the default structure of MySQL databases, but it wouldn’t create anything.

I checked the permissions which, naturally, were good – I had logged in as root – so I tried again. And again.

The console just hung on a create command.

I navigated to users and tried to create a new user, but that achieved the same square-root-of-absolutely-nothing result.

I then spent a good 20 minutes googling various combinations of the words ‘phpMyAdmin fails to create new database and user’.

The only thing I learned is that a significantly large number of people don’t know the difference between hosted and self-hosted, and root and full access permissions.


So then I looked at shelling out to the MySQL command prompt and using command-line syntax to create the database.

Fortunately MySQL syntax is a variation on the PL/SQL that I speak, so, refreshed in the patois of MySQL, I rolled up my sleeves and…

… thought about it.

Why is phpMyAdmin allowing me to do everything except execute any type of write command?

I scouted around the internet for a phpMyAdmin upgrade, because the more I thought about the problem, the more it began to feel like a command/interface issue.

I went on to Synology’s website and, while looking for a phpMyAdmin upgrade, discovered that the operating system of my NAS was two versions out of date.

It was a bit of a swervy long-shot, but I gave it a go and upgraded the OS.


Then I ran phpMyAdmin again, navigated to the root of MySQL and, exactly as I had done several times before, I attempted to create a new database.

It only bloody worked.

I have no explanation for why; I can’t believe that having a version deficiency in the OS would stop phpMyAdmin from writing to MySQL, however…

Each of the two version upgrades required the NAS to be restarted, so maybe restarting the NAS was the catalyst that fixed the problem?

The old “switch it off and switch it back on again” technique?

If so, I’m slightly disappointed, because I thought Linux and MySQL were above such fripperies.

But, meanwhile, back at the geek factory…

I created a user for that newly created database, and allocated permissions and a password to it.

I fished the wp-config.php file out of the WordPress root, edited into it the details of the database and the user I had just created, filled in the password and the hostname, saved it back to the WordPress root and then, in a browser, entered the URI.

I was expecting to see either a vanilla WordPress installation on the website or the WordPress config screen.

Instead I got a database error.

I fished back into the root of WordPress and renamed the wp-config.php to something the system wouldn’t be able to identify.

Flipped back to the browser, entered the URI again and got the WordPress setup/config creation page.

I entered the details of the database/user/host etc, accepted the changes, and ran straight into a vanilla WordPress installation.

Brilliant, it worked!


Because I had used an old version of WordPress, the system prompted me to go and grab an upgrade.

When I tried to run through the usual WordPress upgrade procedure I got this screen:

The problem is, as geeks might have guessed, I haven’t used FTP anywhere in this process.

I hadn’t even configured FTP.

So I did the manual upgrade (downloaded the zipped WordPress package, exploded it, copied the exploded files into the root webfolder) and then hit the refresh button.

And got a database error.

So I fished back into the WordPress root, deleted the wp-config.php, refreshed the website and, as expected, got the WordPress installation page again.

I filled in the database, host, username etc (again) and the website loaded.

Since then I’ve been playing with load functions, and attempting to analyse performance against user function and database activity.

The most that I can move the needle is to get the NAS resource monitor to flick up to 4kb/s of upload (to the web) activity, concurrently with 3kb/s of download (from the internet) traffic.

I don’t know how these numbers stack up in the Grand Scheme Of Things, but in my world right now, an activity ratio of 4kb/s over 3kb/s, on a combination of back-end and front-end utilisation is pretty encouraging.

But it is only one website.

Though it is loading lightningly fast!


The bottom line is that I need to investigate the FTP functionality, to allow WordPress to self-upgrade and, importantly, allow the WordPress plugins to self-upgrade too.
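
One avenue worth trying first (an assumption on my part, not something tested on the Diskstation): WordPress only falls back to asking for FTP credentials when it cannot write to its own files as the web-server user. If the web-server user owns the WordPress directory, a single line in wp-config.php tells it to write directly and skip FTP altogether:

```php
// Tell WordPress to write to the filesystem directly instead of
// falling back to FTP. Only safe when the web-server user owns
// the WordPress files.
define('FS_METHOD', 'direct');
```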

But that is the only thing left to do.

I’ve proved, with this prototype, that I can deliver what was in my head.

I will do no more (he said, trying to sound resolute) work on this concept until after I have moved.

But I know of a very nice pair of PowerEdge servers and a compatible rack for sale, for not much money…

Moving the second (part II)

I had some free time* this evening, so I thought I’d investigate the NAS’s capabilities.

I downloaded, installed and configured an instance of the phpMyAdmin control panel (the Synology Diskstation NAS already has MySQL installed).

Then I used the internal package downloader to grab a copy of WordPress, and installed that as an intranet blog.

Well, I said to myself (I seem to be doing a lot of this, lately. I’m going to have to keep a watch on this. Yes you are. Who said that? I did. Who are you? Look, just get on with documenting what you did, you big geek) if building an intranet website is that easy, why don’t I use the next half an hour to set up an internet website on the NAS.

Well it didn’t take half an hour, obv.

But after a lot of fiddling about within a handful of different modules, and reconfiguring various router and server ports, and firewall rules, everything looked about ready to go.

So I went to the registrar’s control panel of a domain I own that’s been dormant for the last six months, and pointed it at my static IP address and at the @, A, and nameserver records I had created on the NAS.

Then I did the most crucial part of the whole exercise.

I drank tea.

Lots of tea.

Then I dropped a static index.html file into the root of the website’s domain I had created on the NAS.

Opened a browser.

Typed in the URI.

And blow me, the bloody thing worked!

The next stage is to install the .php gubbins that is the marvel of WordPress.

Then I’ll have to open up the phpMyAdmin control panel and create a database.

Then I’ll open up the default wp-config.php and edit in to that the details of the database I’ve just created.

And then I just run the install.php


Well. Yes, I think so.

But it’s 9pm now and that’s bedtime for me

So I’ll do the .php and MySQL/phpMyAdmin stuff tomorrow.


*should have been packing but what the hell

Moving the second

I currently host twenty-three domains in my partner hosting account, based in Arizona.

In the last six months there have been a couple of periodic capacity issues on the shared server. These have manifested themselves as, at best, occasionally slow page loads and, at worst, unreachable websites fronted by server-generated error messages.

A reverse lookup of the IP address shows that there are currently 7,279 websites hosted on that one shared server.

Even though most of them are probably low-volume traffic websites, 7,279 websites on one server is a pretty big number.

Because of this big number, and prompted by the periodic performance issues of the shared server, I’m thinking of moving the twenty-three domains to a new home.

An analysis of traffic shows that none of the 23 domains are particularly high-volume.

The top three get in the region of 250-550 page-views a day, each. Then there are a few specialised websites that have peaks and troughs in visitors, but probably hit an average of around 100-200 page-views a day. The rest are esoteric, highly niche websites that receive very low traffic – around 25-50 page-views a day.

In terms of internet traffic, those figures add up to barely anything. Any reasonable webserver should be capable of dealing with that kind of demand.

I like the idea of having everything hosted on a server that is shared (by my websites) but dedicated (to me).

So I have been looking in to leasing a dedicated server with a commercial hosting provider. It’s an expensive option. It would give me a naked server, installed with just an operating system.

The geek in me is quite excited by the thoughts of what I’d have to do to that server in order to turn it into the host of multiple websites.

A challenge, too.

But fun.

And rewarding, once completed.

So that’s an option, but it’s the second option under consideration.

The first option that I’m going to pursue is to look at hosting a couple of these websites on my Synology Diskstation NAS, just to see how that goes (the NAS already has a static IP address, so all I would need to do is create a zone file with some nameserver details).

In a way this brings the same challenges (and the same opportunities to geek myself to a happy place) as leasing a naked server.

There are a lot of practical things to be learned: for example, do I run one instance of MySQL for each website’s database, or run a single instance of MySQL and give each website its own set of tables?
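
For what it’s worth, a common middle road is a single MySQL instance with a separate database, and a separate user, per website: the sites stay isolated from each other without the overhead of running multiple server processes. A sketch (the site names and passwords are placeholders):

```sql
-- One MySQL instance, one database and one dedicated user per site:
CREATE DATABASE siteone_db CHARACTER SET utf8;
CREATE DATABASE sitetwo_db CHARACTER SET utf8;
CREATE USER 'siteone'@'localhost' IDENTIFIED BY 'first-secret';
CREATE USER 'sitetwo'@'localhost' IDENTIFIED BY 'second-secret';
GRANT ALL PRIVILEGES ON siteone_db.* TO 'siteone'@'localhost';
GRANT ALL PRIVILEGES ON sitetwo_db.* TO 'sitetwo'@'localhost';
```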

But yes, hosting my own websites on my own hardware could be a fun thing to do.

My NAS is capable of hosting up to 30 websites, according to the manufacturer’s blurb.

There are many security packages in the NAS software library.

The house I’m moving to is served by a broadband connection that’s currently delivering 80Mb/s download and 12Mb/s upload; the 12Mb/s upload is the side that matters for serving websites, and it’s more than enough bandwidth for traffic at these levels.

So once I’ve moved house, I’m going to give it a go.

I’ll host one or two of my most expendable, least high-traffic websites on the NAS. I’ll take some metrics, and we’ll see how it all goes.

I can’t help noticing there’s a PowerEdge server (2x Quad Core, 2.33GHz, 16GB RAM) on eBay for £200.

Blogathon 3/13 NewTech!

I have a new toy!

Yes indeedy, the Tablet generation has a new member.

I have finally managed to beg, borrow and/or steal the 32GB version of a Nexus 10.

It’s very early days in this fledgling, but hi-tech relationship, but so far things are pretty comfortable.

I’ve installed the same applications on the Nexus 10 that I run on my phone (this blogpost is being typed via the very nifty WordPress application, using the SwiftKey virtual keyboard).

My two laptops have become exclusive to video and audio editing/production, whilst the Nexus 10 has become the primary Internet workhorse.

Because I migrated all of my audio, video, and text-based files and projects onto my NAS just over a year ago, the Nexus 10 has access, via WiFi, to 1.5TB of multimedia data from anywhere.

One new development is that I’m also using the Nexus 10 as a Kindle.

So I have all this highly portable tech.

Shame I’m mostly using it for Twitter.


Not getting the internet in Ireland and Germany

Irish Newspapers want to make it illegal to link to their articles, or charge people who do (link to article) (via @rodti)

Absolutely scandalous!

And then news came in from @syzygy that the same information battle is being fought in Germany (link to article)

Jesus, are these people not getting it?

Here’s an idea for the German and Irish newspapers: if you don’t want people to link to your articles, instead of trying to control the internet use of the population of the entire planet, why don’t you just stop putting your articles on the internet?

How about that?

Because it seems like the perfect solution to me.

Gagz a plenty

Twitter (no matter how you feel about it) is a medium for good and bad.

Yes, utter morons like Piers Morgan spout their meaningless gibberish to an incomprehensibly large readership.

And yes, there are mentally challenged people like that guy who said evil things to the diver (I’m a little tired, just nod and say ‘yes, I know who you mean’).

But, idiots like these aside (and there are many idiots like these out there), there are many good people.

Sensible, wise, educated, erudite…

I am, obviously, none of these.

I am, occasionally, funny.

More on this in a moment.

But the thing that makes Twitter really work (instead of being a pale parody of itself, as Piers Moron uses it), is to follow back all of the people you follow.

It is, after all, a *social* network.

And when you follow all of your followers, you build up a rapport with them.

And occasionally you meet them.

So this Saturday I’m going to Aston le Walls for Pimm’s, pasties and ponies.

All organised via Twitter; through people I follow/who follow me.

It’s nice.

Anyway, back to the funnies.

Here’s just a selection of my funniest Tweets during the last 48 hours:

  • I have to spend all day in Swindon tomorrow. It’s a bit like community service.
  • My neighbour has just gone to Brecon Jazz Festival. Didn’t have the heart to tell him it’s about music.
  • I’ve sat on this couch and watched so much of the Olympics that I’ve got athletes bum.
  • If lady boxing is the official title of boxing sport for ladies, I can’t wait for the introduction of gardening sport for ladies.
  • Just imagine how riotously funny the Dressage TV commentary would be if it was hosted by Ant and Dec.
  • The dressage will now have a 10-minute break for a course walk. (Eventers joke)
  • Just bought a lottery ticket. Because it’s £148m and that would buy a lot of chocolate and ponies and unicorns and stuff.
  • Sorry, I’ve been out of the loop on this and I’m just catching up. So Fern Britton is pregnant and Jessie J is the father?
  • This is my 42,000th Tweet. It’s a significant milestone in David Cameron’s shining political career. Oh. Wait.
  • Hahaha! London Southend airport. You may as well rename Kidlington airport as London Oxford airport. Oh. Wait…
  • Only just realised that Fence One of the Olympic SJ is not a rustic obstacle (another Eventers joke)
  • I did not just spend an entire meeting considering driving into town to hit Greggs for three vegetable pasties.

See what you’re missing?

Or maybe not.

Rooting for the router (pt 2)

I decided to follow a variation of Daniel’s excellent suggestion.

I ran ping -t for 24 continuous hours and the router behaved itself perfectly.

I started to suspect that running ping -t was forcing the router to keep the connection ‘live’, so yesterday evening I terminated ping -t and went to bed.

This evening I came home, booted up my laptop as usual and pootled about on the internet…

For two hours.

Two hours is the amount of time it took for the Netgear N150 WNR1000v3 Wireless Router to lock up and freeze all internet access.


This time, with internet access locked out, I tried to ping my default gateway. Unsurprisingly I got a big fat Request Timed Out.

So, leaving the router running (but locked out), I connected my laptop to the Netgear N150 WNR1000v3 Wireless Router with a cable.

Guess what!

I got instant internet.


So despite my laptop ‘seeing’ an ‘Excellent’ WiFi signal strength, and despite my laptop saying the status of that WiFi is ‘Connected’, the only way I will be able to connect to the internet is by rebooting the router.

Or using a cable.

Which is, obviously, actionable under the Trade Descriptions Act, for a WiFi router.

In fact, the only reason I can publish this blog post is because I’m still using the cable, directly connecting me to the Netgear N150 WNR1000v3 Wireless Router.

As a final check, I disconnected the cable and attempted to attach – via WiFi – to the router’s admin control panel.

Yep, that worked. I was able to go in, change WiFi channels and update all other router features, via WiFi.

But couldn’t get to the internet, via WiFi.

So this experiment also established that my laptop’s WiFi wasn’t to blame. The problem appears to be the WiFi function of the Netgear N150 WNR1000v3 Wireless Router, which locks out.

On both of the routers.


How come no-one else has experienced/found this problem?
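
If it happens again, one way to pin down the exact moment of each lock-up, rather than running a bare ping -t (which is the Windows continuous-ping form), is a small loop that probes the gateway and logs a timestamp on every failure. A sketch, where the gateway address and filenames are placeholders:

```shell
# Probe the given command TRIES times, INTERVAL seconds apart,
# appending a timestamped line to LOG each time the probe fails.
log_drops() {
  probe="$1"; log="$2"; tries="$3"; interval="${4:-5}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if ! sh -c "$probe" >/dev/null 2>&1; then
      echo "$(date '+%Y-%m-%d %H:%M:%S') probe failed" >> "$log"
    fi
    i=$((i + 1))
    sleep "$interval"
  done
}

# e.g. one probe every 5 seconds for an hour, against an assumed
# gateway address (substitute your own):
# log_drops "ping -c 1 -W 2 192.168.1.1" ping-drops.log 720 5
```

A log like that would also make a far more persuasive exhibit for Plusnet (or Netgear) than “it froze again”.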

Rooting for the router

When my broadband was upgraded, the BT Openreach engineer installed a BT Openreach Modem which he coupled to a (new) Netgear N150 WNR1000v3 Wireless Router.

And for a while – a couple of days – everything was fine.

Then I started to get locked out by, what I suspected to be, the wireless router.

I’d be pootling through internet activity (and that could mean downloading audio or video via direct links, or via iTunes, or via media secure portal; or browsing websites, or uploading files of any description) when suddenly I’d get a browser-generated ‘I can’t find the website you’re looking for’ error message.

All download/upload activity would cease, as my internet connection was, effectively, lost. Yet my laptop still maintained a solid WiFi connection with the router.

I would also lose connection to my NAS, which is directly attached to the router, via cable.

I restarted the router and, when it had completed the reboot, everything was back on line; Internet was back, upload/download continued and my NAS began talking to me again.

Until the next time; a couple of hours later the same thing happened again. I restarted the router again, and was back up and running in a few minutes.

A couple of hours later (there’s no ‘passage of time’ connection to this event – and, also, there is no ‘I was doing this thing’ connection either), my internet connection froze again.

Over the next three days my internet connection froze ten times. That’s ten ‘switch the router off and switch it back on again’ events in 72 hours.

I rang Plusnet – my ISP – and explained the problem. The nice chap on the other end of the phone suggested two things. Firstly, I should do a factory reset on the router. Secondly, I should change the channel that the router broadcast on.

I did these things.

Less than three hours later my internet connection froze again. A couple of hours later it happened again. And even later, it happened again.

I rang Plusnet, we talked about the recurring fault and they said they’d send me a new router.



Good old Plusnet!




Three days later I installed the new router, connected my WiFi and non-WiFi devices to it and off we went.

For five hours.

Then my internet connection froze.

Wanting to be analytical about this, I rebooted the router and then accessed my NAS control panel.



In here I checked for operating system updates (there were none), then I took a deep breath and shut down the NAS unit.




That left me with the N150 WNR1000v3 Wireless Router and two WiFi-enabled devices. I switched the other device off, which left me with the router and my primary laptop.  Not exactly going to tax the router too much, I thought.

A couple of hours later my internet connection froze again.

I rebooted the router.

The internet froze again some time later.

I called Plusnet and explained the problem. The nice guy on the other end of the phone suggested I do a factory reset and change the channel the router broadcast on. I explained I’d done that with the previous router, but I would do it with the replacement one as well. He also suggested that the next time it happened, instead of rebooting the Netgear WiFi router, I left that switched on but that I should reboot the BT Openreach Modem.

So I did these things.

I did a factory reset on the Netgear router, I changed the channel of the router and, when my internet connection inevitably failed a few hours later, I reset the BT Openreach Modem.






Which had no effect at all.

The only way to get my internet connection back was to reset the Netgear Wifi router.

I called Plusnet (who I have absolutely no issues with at all) and asked them to search their records to see how many instances of the N150 WNR1000v3 Wireless Router locking up had been reported to them.

None, was their reply.

So what is it that I’m doing, what is it in my setup, that has continually forced two N150 WNR1000v3 Wireless Routers to lock up and freeze my internet connection?

Answers, please, on a used £5-note.