
Guide: How to secure your WordPress blog

With around 200 million users worldwide, WordPress is not only the most popular blogging tool there is, but also one of the most successful content management systems on the web.
So it's no wonder that we periodically hear about rounds of attacks on the platform. The bigger the target, the more likely people are to aim for it.
There are few things more sobering than waking up one morning to find that your sites have apparently disappeared or that they're suddenly serving malware. It needn't be that way if you maintain control of your WordPress installation and make it as exploit-proof as possible. That doesn't require constant vigilance – just a bit of tweaking after installation and a secure routine from then on.
Post-install cleanup
After installation, there's some immediate housekeeping that you'll be prompted to do. Don't put it off – do it straight away.
The most important change is to delete or disable the 'install.php' file in the wp-admin folder. That's the script WordPress runs during installation to set up its database, and once the install is finished it has no further use. It can be removed, or you can FTP to your website and rename it to something like 'installOLD.xxx'.
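If you'd rather block the file than rename it, a couple of lines in a wp-admin/.htaccess file can deny all access to it. This is a minimal sketch using Apache 2.2-style directives, so check that your host runs Apache and adapt it to your server's configuration:

# wp-admin/.htaccess: refuse all requests for install.php
<Files install.php>
Order Allow,Deny
Deny from all
</Files>

With this in place, any request for install.php returns a '403 Forbidden' error instead of running the script.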

Web design blogger Jeff Starr suggests a more lateral solution: replace install.php with a fake file that generates an error message and sends you an email informing you there's been a hack attempt. His replacement install.php can be downloaded from his website.
With the installation file safely removed, it's time to turn your attention to the 'admin' user. By default WordPress creates a user named 'admin', with supreme power over your blog. It also nags you to change the automatically generated password for that user.
Having an administrator with a generic name is a security risk, so change that, too. Log into your WordPress dashboard first, then go to Users and click 'Add New'. Fill in the short form with a username, password and email address as prompted. Crucially, in the Role dropdown menu, choose 'Administrator'. Then click the 'Add User' button.
Log out of your WordPress blog and log back in as the new user. Go to the Users section again, this time choosing 'Authors and Users'. Hover your mouse over the 'admin' user and, when the link appears, click [Delete]. All gone.
Here's another default to alter. When WordPress installs, it adds the prefix 'wp_' to every table it creates in its database. At the time of installation, you have the opportunity to change that. So, if you're installing a fresh copy of WordPress, change 'wp_' to 'xcw_' or 'ff134d_' (or anything but 'wp_'). This will slow down script kiddies intent on SQL injection attacks tailored to WordPress.
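On a fresh install the prefix ends up as a single line in wp-config.php, so if you're creating that file by hand you can set it there before running the installer. A minimal sketch, with 'xcw_' as a purely illustrative value:

// in wp-config.php: every table WordPress creates will start with this prefix
$table_prefix = 'xcw_';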
You can still change the table names if you've already installed. Go to Plug-ins in your Dashboard and choose 'Add New'. Search for WP Security Scan and install it – this can change your database prefix. Go to the new Security entry in your navigation bar and choose 'Database'. Enter any new prefix and click 'Start Renaming'.
Password protection
The wp-admin folder in general could be a target for hackers. You need to retain access to it, though, so the answer is to password-protect it at server level. Most hosting plans come with a control panel to manage your server, usually cPanel. Consult your web host's documentation and connect to your cPanel if it's available.
You should find an item labelled 'Password Protect Directories' in the Security section. You'll now be prompted to select a folder. Select 'wp-admin' and then create a password and username.
To manually protect the wp-admin folder, start by creating a plain text file called .htpasswd. This file should contain one line, with a username and password pair, something like the following cryptic string:
userbloke:$apr1$.3Bsf/..$TqAKc.jcPn2Ko.d5pLIv
The password here has been encrypted (the real password is 'meatballs'). You can encrypt passwords for use in .htpasswd files with an online tool such as the one at www.htaccesstools.com/htpasswd-generator.
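Alternatively, if you have shell access to your server, Apache's own htpasswd utility can build the file for you. A sketch, with the file path and username as placeholders you'd replace with your own:

# create the file and add a first user (you'll be prompted for the password)
htpasswd -c /home/yourname/.htpasswd userbloke

Omit the -c flag when adding further users to a file that already exists, otherwise it will be overwritten.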

The next step is to upload the .htpasswd file to your server. If possible, place it in a folder above your site's web root, so that it isn't publicly accessible. If that's not possible, put it in a folder that sits alongside your root folder.
To password-protect the wp-admin folder now, create a new .htaccess file containing the following content:
AuthType Basic
AuthName "Authorised Users Only"
AuthUserFile /path/to/.htpasswd
Require valid-user
The path on the third line should be the full server path to your .htpasswd file. Once you've created this .htaccess file, place it in the wp-admin folder on your server.
Protect your plug-ins
Your Plug-ins folder is vulnerable to exploits. This is particularly the case if the server configuration leaves it open to being listed. There are two fixes for this.
First, an .htaccess file can be used again. Create a plain text file and add the line Options -Indexes. Upload this to your '/wp-content/plugins' folder and rename it to .htaccess. The second method is even easier.
Create a blank text file named index.html and upload that to your Plug-ins folder. Result? A blank page instead of the directory listing.
Monitor files
A sure sign that your site has been compromised is that files have been changed. We use the plug-in WP Exploit Scanner to generate a list of altered files in our installation. This works well, but it takes a fair amount of time to read through its full output.
Another approach is to monitor files for signs of unauthorised changes. You can do this with another plug-in, WordPress File Monitor. Both are worth installing, and you can find them by going to Plug-ins in your Dashboard, choosing 'Add New' and searching for them by name.
Make frequent database backups
Even with your WordPress installation locked down as tightly as possible, you can't guarantee that your site will never be compromised. Frequent database backups are therefore advisable.

Recent versions of WordPress make this easy. Go to your Dashboard, then to the Tools section and choose 'Export'. This enables you to back up all your posts, comments, categories and tags as XML.
To make automatic, regular backups, go to the Plug-ins section, click 'Add New' and search for WP DB Backup. This adds a Backup page to the Tools section, from which you can download your database tables (plus any additional tables you select) as a MySQL dump, or schedule regular backups.
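If your host also gives you shell access, the standard mysqldump tool offers a belt-and-braces alternative that works outside WordPress entirely. A sketch with placeholder credentials and database name:

# dump the whole WordPress database to a file (you'll be prompted for the password)
mysqldump -u dbuser -p wordpress_db > wordpress-backup.sql

The resulting .sql file can be restored later with the mysql command-line client should the worst happen.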
In Depth: The 10 most influential computers in history
Some people view the digital revolution as being just a little over 50 years old – but the fact is, today's most powerful computers are the result of decades, centuries and even millennia of development. At the beginning of the story, you could argue, we humans counted with our fingers, and from that clumsy process the Intel Core i7 was eventually born.
To explore the genesis of the computer we set ourselves a task. We wanted to warp back to the very earliest days of computing and track our way through to today. As we walked through the ages, we wanted to identify the most influential computers – machines that shifted the course of computing forever. So, come with us as we visit the most influential computers ever made.
1. The abacus
OK, so the abacus was hardly a computer, but we really can't start our journey anywhere else but here. This ancestor of all mechanised computing aids was first used in Sumeria and dates back to before 2,000 BC. A variant is still in use in the Far East today.

In its usual form, the abacus has several rods – each of which represents a power of 10 – onto which beads are threaded so that they're free to slide up and down. If you want to get a hands-on view of how this mechanism assists people with simple arithmetic, take a look at the JavaScript abacus.
While all fell short of what we'd now consider a computer, various arithmetic devices were introduced over the following millennia, some of which remained with us until the 20th century.
One abacus descendant is the slide rule. This was an analogue calculating device based on logarithms, and it was famously used by a bunch of boffins in the 1950 BBC election broadcast to calculate the swing as each result came in.
The mechanical adding machine, on the other hand, was a common piece of office equipment until it was replaced by the electronic calculator in the 1970s.
2. Babbage's Difference Engine
An abacus, a slide rule or an adding machine could each be used to perform a single calculation. Babbage's Difference Engine was quite different. It was intended to perform a series of calculations.
Designed between 1847 and 1849, it was never actually built in Babbage's lifetime. In 1991, however, the London Science Museum built a model to Babbage's original plans. It worked perfectly.

Although it was a purely mechanical machine, driven by a crank handle and containing cogs, gears and levers, it accurately calculated and printed tables of polynomials that were used for astronomy and ballistics.
Next came Babbage's steam-powered Analytical Engine. Unlike the Difference Engine, which was designed to perform a particular type of computation, the Analytical Engine was a programmable or universal computer in just the same way as today's PCs. Indeed, programs written for it by Babbage's contemporary Ada Lovelace bear an uncanny similarity to modern computer programs.
Add all this up and you could argue that the Analytical Engine represents a more significant step than the Difference Engine did. The problem was that the Analytical Engine was never built, and so the machine remains largely untested.
3. Colossus
The first completely electronic computer
Like Babbage's Analytical Engine, Colossus was a proper computer rather than a mere calculating machine, albeit one that was designed to perform one very specific type of calculation.
Where it broke new ground was that, for the first time, it was purely electronic. Created by Tommy Flowers and his team at the Post Office Research Station in 1944, it was used at Bletchley Park as part of the World War II code-breaking effort.

While a mechanical computer called the Bombe had been adequate to crack messages encrypted using the famous Enigma machine, the more complicated German Lorenz cipher machine that was used to encode teleprinter traffic required the increased speed of an electronic computer in order to break the code.
Colossus contained no less than 2,400 valves. With memory being expensive, the data was operated on directly from paper tape. As a result, the speed of the computer depended on the speed of the tape reader.
Operating at 40 feet per second (27.3mph), Colossus had a speed of 5,000 characters per second. A rebuilt Colossus is now on show at the National Museum of Computing at Bletchley Park.
4. ENIAC
Designed and built at the University of Pennsylvania under a US government contract, and intended for nuclear weapons research, ENIAC became the world's first 'universal' electronic computer: in other words, one designed to do any job according to its programming.

It was completed in 1946, and its headline figures are startling. It contained 17,468 valves, 7,200 diodes, 1,500 relays, 70,000 resistors, and 10,000 capacitors, all held together by about five million hand-soldered joints. It weighed 27 tonnes, measured 2.6m x 0.9m x 26m, and consumed 150kW of electrical power. When translated into today's terms, it set Uncle Sam back about $6 million – and that's not including the power bill!
Despite being the first universal computer, ENIAC differed in several important respects from its various successors. For a start, it operated on decimal rather than binary arithmetic, something that contributed massively to the valve count – which was huge, given its rather pedestrian performance.
More significantly, despite being universal in nature, ENIAC was programmed by plugging patch leads and configuring switches. As a result, changing the machine's configuration from one operation to another was a task that would typically take several days to complete – a far cry from the simple speed of loading a program from disk that we're used to today.
5. The Manchester Baby
The Small Scale Experimental Machine (SSEM), or Manchester Baby, was completed in 1948. It was dubbed 'Baby' to differentiate it from its successor, the Manchester Mark 1.

The SSEM was groundbreaking. Here was a computer that was fully electronic and truly universal, and that could, for the first time, execute a program stored in internal memory. As it was the first ever stored-program computer, we can draw some direct comparisons between the Manchester Baby and today's PCs.
It had 550 valves (transistors, integrated circuits and microprocessors were still some way off), and just seven instructions, which could be executed at a rate of 700 per second. It had 32 words of 32-bit memory.
Although you'd have to put several zeroes on the end of these figures to come close to describing today's PCs, it's fair to say that the world owes a debt of gratitude to SSEM's creators Freddie Williams and Tom Kilburn of the University of Manchester.
6. IBM System/360
Even by the early '60s, a dozen years after the pioneering SSEM, computers were still most definitely for the few rather than the many. IBM, which had entered the computer market back in 1953, was about to change all that.

The IBM System/360 was launched in 1964, and is considered by some to have been the most successful mainframe computer of all time. The System/360 had a 32-bit architecture, something that didn't make its appearance in the PC market for another 21 years. Although few companies could afford to fully populate them, some models could take up to 4MB of memory.
Perhaps the main attraction was that software developed for any model would run on any other, thereby permitting an upgrade path.
IBM soon emerged head and shoulders above the competition, and went on to dominate the mainframe market for decades. Viewed in the light of today's computers, a System/360 would appear huge. Each component – the CPU, the disk drives, several tape drives, line printers and an operator's console – was housed in its own cabinet, which meant that a system would occupy a whole room, one that needed to be both fairly large and well air-conditioned.
7. DEC PDP-8
Despite the success of the IBM System/360, mainframe computers still remained the sole domain of moneyed government departments, universities and large corporations. Generally leased rather than bought, these machines could set their owners back a million dollars a year.

The computers would also need a team of operators to care for the machines, further pushing up costs. By the early 1960s, the race was on to downsize the computer and, in so doing, make computing accessible to smaller organisations.
While other companies may have created mini-computers first, the first firm to really break into this embryonic market was Digital Equipment Corporation (DEC). Introduced in 1965, DEC's PDP-8 was the first mini-computer to sell in significant numbers. It was sold for a fraction of the price of even the smallest IBM System/360 mainframe.
The CPU was about the size of one of today's large PCs, and when the storage and other peripherals were added, the whole computer was about the size of a domestic fridge. Most importantly, it could be operated by the people who needed to use it. It never sold as many as its successor – the hugely popular PDP-11, which was launched in 1970 – but that doesn't change the fact that the PDP-8 got there first.
8. IBM PC
Such was the magnificence of the PDP-8 and the mini-computer era it pioneered that it took a whole 16 years for the next true hero of computing to come along. There were some noteworthy efforts in the interim – the Apple II and the Commodore PET – but they were all overshadowed by the IBM 5150.

This pioneering machine was launched in 1981 and it kick-started the desktop PC revolution. Indeed, your desktop PC of today is very much its direct descendant.
At the time, commercial success wasn't exactly assured, because the 5150 was massively expensive. The original 1981 PC sold for $1,565, which would be the equivalent of $3,900 (or £2,600) today!
Despite this wallet-wilting price, the machine had a very sparse specification. For example, it didn't come with a monitor: you had to use a TV. It had 16kB of memory and, as hard drives were squarely a thing of the future, you had to make do with floppy drives. Even these were an optional extra – IBM intended 5150 machines to store data on cassette tapes.
9. Sinclair ZX81
If imitation is the sincerest form of flattery then IBM must have been over the moon at the appearance of the many clones of its PC that soon flooded the market. Although the following wave of lookalikes succeeded in forcing prices down, back in the 1980s the PC was still most definitely a tool for businesses only.

The machine that changed all that, at least here in the UK, was the Sinclair ZX81, which is still remembered for its super low price. On its launch it took the country by storm. It might have looked more like an overgrown calculator with its primitive membrane keyboard, but it cost just £69.95 (or £49.95 if you were prepared to solder the components onto the circuit board yourself).
Sinclair managed to keep the ZX81's price low by reducing the number of chips on the motherboard from 21 on the ZX80 to four on the ZX81.
Needless to say, the ZX81 didn't pack the same sort of punch as the IBM PC. It had an 8-bit Z80 processor with a clock speed of 3.25MHz, 1kB of RAM, and featured monochrome output to a TV set. The display comprised 24 lines of text, each 32 characters long or, in block graphics mode, it provided a resolution of 64 x 48 pixels. Oh, and you had to use a cassette recorder for storage.
10. Apple Mac
The computer that brought us a GUI and mouse
No, you haven't opened a copy of MacFormat by accident: even a dyed-in-the-wool PC user would have to admit that the Apple Macintosh was groundbreaking when it first appeared back in 1984.

Today its all-in-one appearance looks somewhat quaint, but it had one very important thing going for it that wouldn't emerge into the PC world for another eight years. When PCs were still driven by entering cryptic commands at a prompt on a text display, Apple Mac users were clicking graphical icons on-screen and having information presented to them in windows.
Here was the first mass-produced computer to be shipped, as standard, with a graphical user interface and a mouse – and the computer industry has never looked back. While touchscreens and voice input have been hyped as the next breakthrough in user interfaces, this concept is alive and well over a quarter of a century on.
Review: EVGA Superclocked GeForce GTX 465
At the beginning there was the word and the word was Fermi, and Jen-Hsun saw that it was good. Unfortunately he also saw that yields for the good Fermi chip weren't too hot, and lo, the GTX 480 was born with a slightly sliced GPU. The GTX 480 begat the further sliced GTX 470 and the GTX 470 begat the even more cut-down GTX 465. So here it is, the last revision of the first Nvidia DirectX 11 graphics chip, the GF100 GPU.
Like the GTX 470 before it, this is a card specifically designed to go after a competing AMD card head-to-head. With the GTX 470 it was priced up against the HD 5870, and for the most part it suffered by comparison once AMD sorted its drivers out.
The GTX 465, then, is aimed squarely at the HD 5850's territory, and it's going for the jugular.
Same old, same old
At its heart is the same huge 529mm2 GF100 GPU, albeit an even more heavily cut-down version than that in the GTX 470. The knife wielded by Nvidia to hack the GTX 470 into a more cost-effective Fermi card made some smart cuts to the expensive hardware.
You lost only a single streaming multiprocessor, which meant a loss of just 32 processing cores. The memory bus was cut down too, from 384-bit to 320-bit for its 1,280MB of GDDR5. All this meant that while it was significantly slower than its big brother, the GTX 480, it was still able to keep pace with AMD's HD 5870, even if it was lagging slightly behind in most metrics.
With the GTX 465, Nvidia has been a little more heavy-handed with its karate chop action. A further three streaming multiprocessors have been cut out of the GF100 GPU, which adds up to a loss of 96 of Nvidia's CUDA cores, and again the memory bus has been slashed, this time to 256-bit for the 1GB of GDDR5 sitting on the new PCB.
Clockspeeds across the board have been slashed, too. Compared with the GTX 480 this is a massive drop in architectural goodness, as well as a fair drop in performance. However, when you compare that with the HD 5850, spec-wise things aren't looking too bad.
The HD 5850 is running on a 256-bit memory bus with 1GB of GDDR5 too and houses the same 32 ROPs that now sit in the cut-down GPU powering the GTX 465. Of course, the relative architectures of the two graphics cards are very different, and so are the clockspeeds, so in specification terms it's still a bit of an apples and oranges comparison.
Much like the GTX 470 before it, the stock GTX 465 struggles to keep pace with the competing HD 5850 in most performance metrics. At the lower end of the resolution spectrum they're just about level, but as you aim for the high end the performance of the GTX 465 drops off in comparison. They're both running the same 1GB of graphics memory and a 256-bit memory bus, but the HD 5850's higher clockspeeds mean that it wins the price-point battle.
Really, really clocked
Which is surely one of the reasons EVGA has come out rather quickly with the Superclocked edition of its GTX 465 card.
This isn't even the top end of its range, with the Super Superclocked and FTW editions (I'm not making this up) besting this version. Essentially, this edition contains a GPU specially selected for the Superclocked treatment.
These GPUs are more capable than most of handling a decent overclock. The really interesting thing, though, is the amount of headroom this card has in terms of overclocking ability.
By simply upping the core voltage to 1V using the EVGA Precision overclocking tool, we were able to significantly raise the clockspeeds across the board, even on the memory side.
The HD 5850, then, doesn't stand a chance against a carefully overclocked GTX 465. Even at the high end of the resolution scale, where the GTX 465 had previously been relatively weak, it now stands comfortably above the competing card.
In fact with the clockspeed and voltage tweaks, the Superclocked GTX 465 is actually rather close to the GTX 470 in performance terms. The card only drops a few frames per second on even the most demanding of titles at the top res of 2,560 x 1,600.
This overclock also has the added benefit of actually making the GTX 465 fairly competitive with AMD's top single-GPU card, the stock HD 5870. In fact in the tessellation-heavy Heaven 2.0 benchmark the GTX 465 bests the HD 5870 even without the hefty overclock. In most benchmarks, though, the HD 5870 still holds firm, keeping the newer Fermi-based cards at bay.
All is not as rosy as it first seems, though, and inevitably that comes directly from the pricing of this new card. The stock GTX 465 comes in around the £230 mark, making it directly competitive with AMD's HD 5850, and noticeably cheaper than the GTX 470, even with the price-drop bringing it down under the £300 mark.
But that's the stock card: this bin-picked GTX 465 comes in at £270, and that makes it uncomfortably close to the GTX 470. Then we inevitably come to the GTX 460. This brand new card with a brand new Fermi GPU has effectively retired the GTX 465 before it's had a chance to do anything.
The new cards are priced significantly lower than the GTX 465, and the 1GB version comprehensively bests it in practically all performance metrics out of the box – and it stays ahead even when both cards are given some extra overclocking lovin'.
Too much, too close
Unfortunately the £40 premium for this Superclocked edition over the standard version puts it far too close in price to the superior GTX 470. And with the new GTX 460 coming in at the £200 mark – and sometimes proving faster – the choice is a no-brainer.
EVGA must be cursing the recent drop in the price of the GTX 470 (especially considering its Superclocked edition of that card comes in at over £340, giving it a £70 premium), because without that cut this sizzling little card would have garnered the plaudits it deserves.
And with Nvidia effectively retiring this spin of the first Fermi GPU a month after its inception, there are going to be a lot of unsold GTX 465s heading back to Nvidia. The GTX 465 is a stop-gap card and its time has passed already.
It's a shame as it overclocks brilliantly, even with the stock cooling solution, and manages to keep pace with more advanced cards. Unfortunately it's just that little bit too expensive compared to better, cheaper offerings to really be a card that I could in all good conscience recommend anyone picks up.
Opinion: Linux is winning
Linux doesn't have a CEO. Consequently, there's no annual keynote hosted by a charismatic alpha male. But if it did, and if there were a conference covering the first half of this year, the first speech would start with three words: "Linux is winning".
Firstly, a market research firm in the US called The NPD Group revealed that sales of Google's Android platform overtook those of Apple's iPhone in the first quarter of 2010, propelling it into second place behind the waning RIM.
Android is becoming increasingly competitive, spanning both the smartphone and the emerging tablet markets, with devices from Dell and Archos already available. This might be why Apple started a patent infringement lawsuit against HTC, using many of its Android-based phones as physical exhibits in its litigation.
Secondly, Google announced its intention to open-source the VP8 video codec. The codec was acquired when Google bought On2 earlier in the year, and it will be used alongside Vorbis and the MKV container to create Google's WebM video format. This is vitally important for Linux.
The H.264 format, as used by Apple and many HTML5 video streams, is encumbered by patents, and current open-source implementations live under the shadow of litigation. VP8 and WebM have the potential to match it for quality, and while WebM will undoubtedly attract similar litigious trouble, having an umbrella the size of Google should satisfy many Linux distributions, especially when Mozilla, Opera and Adobe have already pledged their support.
Programme for Government
Finally, the UK's new coalition government has published its Programme for Government. There are two points in the section on Transparency that are great news for free software. One states, "We will create a level playing field for open-source software," while the other adds, "We will ensure that all data published by public bodies is published in an open and standardised format, so that it can be used easily and with minimal cost by third parties."
If these promises come true, it will transform attitudes to open-source software and Linux, and hopefully open the door for its use within government and schools, two areas where it's ideal.
Many of us used to think that for Linux to be judged a success, it had to be installed and running on more desktop computers than Microsoft Windows. And there are great swathes of Linux users who still feel the same way. But the world of computing has changed.
There's more than one way of judging the success of something that started as just a good idea. Windows, Linux and OS X are survivors. They've lasted this long because they exist within their own ecosystems.
Linux, for example, is fed by a curious mixture of enterprise investment, embedded hardware vendors and a community brimming with zealous commitment. There's a low cost of entry and a subsystem that maintains itself with very little investment. It's these factors that have shaped how it looks, how it feels and how it's operated.
The ecosystems inhabited by both Microsoft and Apple are equally well-adapted to their environments. The former is the domain of the utilitarians, offering straight functionality for an up-front price. The latter is an increasingly important fusion of fashion and function. But things have changed.
The borders between the ecosystems have become indistinct. Apple has surpassed Microsoft in market value, winning thousands of new fans through its no-fuss interfaces and lower prices. There's a shift in the balance of power.
Less free and open
And thanks to Google, Linux is becoming less free and less open, proving that in the new markets where it's having the most commercial success, it's becoming more like Apple. ROMs are encrypted and need to be rooted for user-hacking, third-party applications have to be sold through a single vendor and personal information is held in the cloud by a sole provider.
If Linux wants a taste of similar success, it might find it by making similar concessions on users' freedom.
But then we'd have failed. The Linux ecosystem would have become too polluted, bogged down by sponsored kernel additions, paid-for support and short life cycles. It may be a commercial success, but no longer an active one.
Our hypothetical CEO might make further compromises, and make judgements against the interest of Linux users. Which is exactly why we don't have a CEO, and exactly why the success of open-source software is so difficult to judge using the same language as its competitors.

