
iOS 4.3.1 on its way to end your iPad 2 jailbreaking fun
Apple is currently readying iOS 4.3.1 for release in the next couple of weeks, if information sent to one tech site is to be believed. iOS 4.3 only went live earlier this month, but Apple looks keen to stamp out any iPad 2 hacking action.
Sources told BGR that the minor update will include a number of fixes, including closing the door on the iPad 2 jailbreak 'vulnerability'.
Tweaks
Other tweaks include fixing issues with third-party apps recognising the gyroscope on the iPad 2, a couple of memory issues, and baseband updates for the iPhone 3GS and original iPad.
It's looking unlikely that the update will hit before the UK gets its hands on the iPad 2, set to launch on 25 March (Friday).
No doubt most of us will be happy to have the new tablet safely in our possession before we start complaining about minor problems, though.
Read More ...
Ford to put DAB radios in all its UK cars by 2012
Ford has announced that it will be integrating DAB technology into all of its car ranges by 2012, a full three years ahead of the radio digital switchover in 2015 and one year ahead of the proposed plan for the car industry. The new Ford Focus already comes equipped with a DAB radio, and the company will make efforts to equip all of its cars with digital radios over the next two years.
Looming issue
Nigel Sharp, managing director of Ford of Great Britain, said about the plans: "[The digital switchover] is a looming issue, and we want to be well placed. The fact is that the Focus's radio is future-proofed now, whereas those in our competitors' cars aren't.
"The plan is to extend that across every Ford model in the next 18 months or so. There are technical issues to overcome because, for instance, traffic alerts are broadcast only in FM at the moment, but we are confident we can achieve our goal."
Getting rid of FM radios and replacing them with DAB is said to be energy efficient, with the digital signal using 7 per cent less electricity than an analogue one.
Plus you get to listen to 6Music, which has got to be a bonus.
Read More ...
New Xbox 2015 release date sort-of confirmed by designer
A designer who has worked with Microsoft has implied that the next generation Xbox is slated for a 2015 release. In his online portfolio, Ben Peterson posted scant details of a collaboration with Microsoft's Interactive Entertainment Business design group.
Alongside an angled image that doesn't give much away, he wrote:
"Microsoft Xbox. Confidential / Collaboration with Microsoft's IEB design group investigating future user experiences and hardware for 2015. *Work samples only permissible in person.* (March 2011)."
The mysterious Ben Peterson
Question marks hang over such information; who is Ben Peterson? Didn't he sign an NDA when working with Microsoft? Why is there no contact information on his online portfolio? Will he be allowed to live after such a faux-pas? Or is this all a traffic-mongering lie?
It's not the first time that 2015 has been mooted as the year of the new Xbox – Microsoft has already said that it sees the Kinect peripheral extending the life of the already-aged Xbox 360 through to 2015.
Perhaps – just perhaps – the company is putting those four years to good use and working on something incredible that will blow us out of the water in 2015 – after all, it is looking for hardware engineers for the Xbox platform, and Ben Peterson certainly makes it sound as though R&D is underway.
We guess we'll find out for sure in 2015.
Read More ...
Best Buy: 48% think 3D is too expensive
Best Buy has been looking into the UK's perception of 3D ahead of the launch of the Nintendo 3DS, and has found a mixed response to what Britain knows and thinks about the technology. Of the 2,000 people surveyed, Best Buy found that 48 per cent still think the technology is too expensive, but at least 70 per cent have had experience with it and enjoyed using it.
Most of this experience is not in the home, however, with less than one in five (16 per cent) having watched 3D on a TV at home or at someone else's house.
3D experience
"Advances in technology can make our lives easier, more fun, more productive and better connected," said Rob Wilkins, Head of Home Theatre and Entertainment at Best Buy UK.
"We want to demonstrate to customers that everyone can get the most out of their technology to live a fuller, richer, faster, digital life."
Best Buy is hoping that it can change the perception of 3D being too expensive, notes Wilkins: "With the popularity of big blockbusters such as Avatar, people may believe that to achieve an effective 3D experience they need to spend thousands of pounds on 3D TVs and equipment.
"We have looked at all of the entertainment products we offer and actually found that prices are very similar to HD TVs, we encourage consumers to take this on board when thinking of buying a new TV so that they get the most out of their technology investment."
The Nintendo 3DS UK release date is 25 March.
Read More ...
In Depth: Tips and tweaks to upgrade your laptop for gaming
Time then to lift the lid on your laptop and delve into its inner sanctum. The juicy technology sandwiched between its chassis is simply ripe for tinkering, and while manufacturers say you shouldn't, we're going to look at how opening up the case can reap rich performance rewards. Figures show that we've all been buying more laptops than desktops over the last few years and it's predicted that desktop sales will remain flat while laptop sales continue to post double-digit increases. In other words, we'll all be buying laptops while upgrading the one main desktop at home.
So it's time we took a close look at how you can overhaul and game on a tired old laptop.
The biggest single disappointment when attempting a laptop upgrade is the steadfast, single-minded blocking the industry and manufacturers put in your way as the owner and user of that laptop.
They might use industry standard components and connections but that doesn't stop them creating parts that simply cannot be removed, or BIOS locks that construct a virtual Stasi, imprisoning your device and only allowing it to work with permissible components.
While this might sound like the ravings of a cirrhosis sufferer at a beer festival, knowing whether a laptop's parts can be upgraded is not only useful for the laptop you already own, but can also help inform future purchases. Choosing a laptop that you know can be upgraded at a later date is invaluable, because it'll extend its useful life.
Hacked drivers
The first step is a few useful software tweaks: hacked drivers can help squeeze more from the hardware. Options to add memory and a fresh hard drive can hugely increase performance as well, as the base installed options can be poor.
The more exotic processor and graphics upgrade routes are substantially more complex, but for many it's a clear-cut case of you can or you can't. Even here, there are other alternatives that can get even the lowest-end options on the gaming platform, as we'll see…
Is upgrading a laptop something you should seriously consider? Well, it's not impossible to do but by the time you've scraped out the inside of your wallet for upgrades you could very well have enough to spend on a similar or better performing new laptop.
Take a look at the Acer Aspire below. It's an example of what £500 will buy: an entry-level gaming laptop. If the cost of your upgrade comes close to or exceeds this, a new laptop is probably a better option. (You could level the same accusation at desktops, but with laptops you're more constrained in your upgrade options, and some manufacturers actively block upgrade routes by locking down the BIOS.)

Negativity aside, though, there's no reason you can't drop in a new mobile processor that provides a bump in clock speed.
Start by searching online to see if anyone has managed to upgrade the CPU. This will effectively tell you the single most critical point: is the processor soldered or socketed?
If it's soldered that's a deal breaker (some people, who probably like playing with liquid nitrogen, mention hot-air soldering but it's another layer of complexity and expense on an already complex and expensive procedure).
Under the hood
If your search turns up good news, you need to get a little intimate with your laptop and tease out the model of its processor. The best generic tool for this is CPU-Z; AMD and Intel also supply their own processor ID tools, the AMD OverDrive utility and the Intel Processor Identification Utility, available from their respective websites.
These should be able to tell you the processor model, speed, voltage, socket and stepping. Take this back to Intel or AMD and look up the processors in the same family line – Core 2 Duo, Pentium or Turion, for example.
To upgrade you'll need a list of processors based on the same socket, same voltage range and within the same thermal profile, which is the power dissipation. The last point is important, because your laptop's thermal module will be tuned for a specific heat output; swapping in a processor with a much higher thermal rating could lead to it shutting down or being throttled.
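Those three checks – same socket, a voltage within the laptop's supported range, and a thermal rating the cooling module can handle – can be sketched as a simple filter. This is a hypothetical illustration only: the model names and figures below are examples rather than a shopping list, so take real candidates from Intel's or AMD's specification pages.

```python
# Hypothetical sketch of the CPU-upgrade compatibility checks described above.
# The candidate list and figures are illustrative only -- always confirm
# against the manufacturer's specification pages and your laptop's manual.

def is_compatible(current, candidate, tdp_headroom_watts=0):
    """A candidate must share the socket, sit within the laptop's supported
    voltage range, and not exceed the thermal module's rated dissipation."""
    return (candidate["socket"] == current["socket"]
            and current["vmin"] <= candidate["voltage"] <= current["vmax"]
            and candidate["tdp"] <= current["tdp"] + tdp_headroom_watts)

# What's installed now (example figures for a mid-2000s Core 2 Duo laptop).
installed = {"model": "T5500", "socket": "Socket M",
             "vmin": 1.0, "vmax": 1.3, "tdp": 34}

candidates = [
    {"model": "T7200", "socket": "Socket M", "voltage": 1.2, "tdp": 34},
    {"model": "T7600", "socket": "Socket M", "voltage": 1.3, "tdp": 34},
    {"model": "X9000", "socket": "Socket P", "voltage": 1.1, "tdp": 44},
]

viable = [c["model"] for c in candidates if is_compatible(installed, c)]
print(viable)  # ['T7200', 'T7600'] -- the Socket P, 44W part is filtered out
```

Even a candidate that passes these paper checks can still be refused by the BIOS, which is why the research step below matters.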
Even with all of this gathered, the laptop's BIOS may simply not recognise the new processor, either refusing to boot or running it at a lower spec. You can do yourself a favour at this point by doing some more research and searching for your laptop model online.
If you can find the processors it supported on release then you should have a better idea of what range of processor speeds and models it should support. It would also be wise to update the BIOS to the latest version, to make sure that it has the latest CPU ID information in place.
We've gone as far as we can without breaking anything or spending money. Before you start, disconnect the laptop from the power supply and remove the battery. At this point we need to at least locate (and establish we can remove) the old processor. Ideally, a large service panel on the back of the laptop – not used for the hard drive or memory – will provide access to the thermal module and internal components.
Hopefully you can see a socket and fixing screw without the need to remove this. However, depending on the design you may have to, along with the discrete graphics unit.
Remove keyboard
The alternative laptop design will require you to remove the keyboard and access the thermal module from the top. Typically you'll need to remove a service cover from around the screen hinges, remove fixing screws you find here, and unclick the keyboard. It's also likely you'll need to disconnect the display data cable and power to get full access.
Actually installing the processor is very similar to doing the same thing in a desktop unit, as the processor is a socket design but instead of the standard ZIF lever it's usually a screw. Just make sure you align the processor keying arrow with the same arrow on the socket.
You'll need to add the usual pea-sized blob of thermal paste before replacing the thermal module. Rebuild the laptop and you'll be ready to restart it.
Here's how to upgrade and enhance your laptop's pixel performance
The Achilles heel of portable gaming is the lack of any realistic graphics upgrade route. That's not to say there are no options available to you – but all of them have their own pros and cons.
The problems start with the utterly feeble abilities of laptop integrated graphics. The drive to cut costs not only cuts any possibility of an upgrade path but also the starting performance. For an older laptop struggling with integrated graphics, the easiest solution for gaming is to know your limits and simply explore games that are suitable for its abilities.
Before you scream "Cop out!", it is genuinely useful to know the limitations of integrated graphics, and the PC has a vast and varied back catalogue containing archetypal, genre-forming games, many of them now rereleased on digital download.
The two main generations of Intel integrated graphics processors (IGPs) are based on the GMA 9x0 (this includes the 3x00 line) and the GMA X3100/4500HD. The distinction is important because Intel did a major overhaul for the GMA X3100 architecture that remains in place right up to Sandy Bridge – but we'll get to that shortly.
The GMA 900 and 950 are found in the mobile 915 and 945 chipsets; confusingly, for N4x0 netbooks it was rebranded as the GMA 3150, which shouldn't be confused with the GMA X3100. It's an incredibly weak IGP for a couple of reasons.
Firstly, it has no actual Transform & Lighting or Vertex Shader hardware: it's all emulated in software on the processor. Cleverly, Intel would later spin this as enabling the driver to choose between the CPU and the IGP for processing vertex instructions, for greater efficiency.
It also uses a 'Zone Rendering Technology', which sounds suspiciously like PowerVR to us and could also explain the horrible lack of optimisation for it. Despite all of this, it still supports DirectX 9 and Shader Model 2.0.
Thankfully, for its next generation of graphics Intel got with the programme. The GMA X3100 onwards use a unified shader model, with Execution Units that handle all operations. The X3100 line (in the 96x chipsets) has eight of these, the 4500HD (the GL/S/M4x chipsets) has 10, while Arrandale (Core i3/5/7) has 12, all sporting DirectX 10 and Shader Model 4.0 support.
Additionally, in recent years Intel has been pushing developers to optimise games for its IGPs, which can certainly help them reach playable frame rates. One example was STALKER, which saw a threefold increase in frame rate when switched from a hardware vertex routine to a fully optimised SSE2 implementation. It's an admission that Intel's own vertex hardware is weak, but running those routines on the processor frees up the Execution Units to do the Pixel Shader work, making the best of a bad situation.
Benchmarking
But what does all of this mean when it comes to playing games? This is where the benchmarking comes in.
We've taken a selection of games that represent different generations of DirectX development and benchmarked these against standard Intel hardware. The somewhat dumbfounding truth is that the GMA 900 IGP is about as powerful as the GeForce 2 GTS, released over a decade ago. Or to put it another way, it'll play the original Unreal Tournament and similar DirectX 8 games. Its lack of T&L and Vertex hardware, coupled with a weak processor, kills all performance.
The more accomplished GMA 4500HD provides a good deal more graphics hardware with real unified shaders and sure enough it's just about capable of playing early DirectX 9 games with three to four times the power of the previous GMA 900 and seems the equivalent of the higher-end GeForce 4, released in 2002.
Finally, Arrandale doubles the base DirectX 9 performance and is on-die, boosting memory bandwidth and putting in a performance similar to the GeForce 5 or Radeon 9800 from 2003. All of this bodes well, with Sandy Bridge using the same 12 Execution Units but increasing the maximum clock rate by a substantial amount and opening access to the L3 cache, making existing games playable at HD resolutions.
So you've tried everything possible with your existing hardware or there's a game you want to play that it simply can't manage. What solutions are left?
There are two avenues you can pursue, and neither is perfect or guaranteed. The first involves using an external PCI-e adaptor and PCI-e graphics card to replace the existing hardware. Available from www.hwtools.net, the PE4H costs less than £65 including shipping, and enables you to plug in and use an external PCI Express graphics card via an ExpressCard slot on the laptop.

This does work on a wide range of laptops; the biggest problems are overall performance and the ExpressCard implementation. With a basic 1x ExpressCard slot, the best performance you can get is around 50 per cent of the card's potential. If you're lucky enough to have a 2x ExpressCard laptop this rises to around 75 per cent, making this a viable, if not fully portable, solution.
Streaming solution
Nothing in this world is going to help a netbook get CoD: Black Ops up and running. But then, as we've seen, that netbook is going to struggle playing the original Unreal. To get sad cases like this onto the 3D playing field we're going to have to use a little lateral thinking; get something else to do all the rendering.
One option is to use a streaming service. The highest-profile is OnLive.com, which has a wide selection of games to choose from; many offer a 30-minute free trial along with three- and five-day passes.
Gaikai.com is a relative newcomer that's in beta testing at the moment. The games run on Gaikai's own servers; you just need to provide the broadband connection to get gaming.
There's a reasonable DIY option available from streammygame.com. This clever system uses your desktop PC to do all the 3D donkey work and streams the result over your local network. The free version is limited to 640 x 480 and your local network; a paid version costs just $9.99 a year and enables streaming over the internet at up to 1,280 x 720. It requires you to create an account and install a server on the main gaming PC.
Playing a game on the laptop is a case of opening the web page and choosing which game you want to stream. This fires up the game on the server PC and away you go. We found the streaming worked well with only a scant amount of lag but it's certainly not as well supported as it could be – with a little more polish it would certainly be a winning solution.
It's an oft-overlooked option but upgrading a laptop hard drive will reap greater performance rewards than you might think. In fact, it could be the solution to two big problems we have with using laptops.
The first of these problems is that laptops only have one drive bay and using external storage means you've got to get up out of your comfy, warm chair (the one with the perfect buttprint) just because you've left the USB cable upstairs. The second problem is that once you start running out of space you can find the drive is moving about as fast as a pensioner browsing a supermarket meat counter.
At this point, the thought of a drive upgrade might fire in those neurons. Most laptops ship with what we might describe as a 'British Rail' class of drive, when you're after 'Deutsche Bahn'-style service.
Smaller capacity 5,400rpm drives are never going to perform anywhere near as well as the upgrade alternatives. The most obvious pick would be an SSD: 64GB models are now available under the magic £100 mark, with 128GB versions under £150.

While we're not here to extol the virtues or pitfalls of these devices, we are here to try and see where best you'd be spending your hard-earned, recession-weary money. Would a faster but lower-capacity 7,200rpm drive be ideal? How about a larger but slower 5,400rpm drive? Or as a further option, should you choose a hybrid drive that packs flash-memory for the best of both worlds?
All of these spinning disk options come in under the £90 mark, making the question of which is best for you a tough one to answer. Even if you already know you want capacity, thus ruling out the SSD option, two of the drives here are 500GB and the Toshiba is a massive 1TB.
With 2.5-inch drives, simply opting for a larger capacity drive will increase performance, as the 'areal data density' of the platter means more data can be read and written per second on a disk that spins at the same speed.
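A rough back-of-the-envelope sketch shows why (the per-track figures here are invented for illustration, not taken from any drive's datasheet): at a fixed spindle speed, sequential throughput is simply revolutions per second multiplied by the amount of data stored on each track.

```python
# Rough illustration of why higher areal density lifts sequential throughput
# at a fixed spindle speed. The per-track figures are invented for the
# example, not taken from any real drive's datasheet.

def sustained_mb_per_s(rpm, kb_per_track):
    """Sequential throughput ~ (revolutions per second) x (data per track)."""
    revs_per_second = rpm / 60
    return revs_per_second * kb_per_track / 1024  # KB/s -> MB/s

older_500gb = sustained_mb_per_s(5400, 700)   # sparser platters
denser_1tb  = sustained_mb_per_s(5400, 1000)  # same RPM, denser platters

print(round(older_500gb, 1), round(denser_1tb, 1))  # 61.5 87.9
```

Same rotation rate, yet the denser drive moves more data past the head every revolution – which is exactly the effect described above.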
That neatly brings in the second way of boosting drive performance; just make it spin faster. Unfortunately that introduces its own set of problems.
Firstly, reading the magnetic field at that speed becomes increasingly difficult, so you need to start reducing the data density for reliability.
Getting warmer
Increasing speeds to 10,000 or 15,000rpm is possible but this introduces heat and these 2.5-inch devices are designed only for servers that can provide adequate cooling. So are the options a slower, more efficient but larger 5,400rpm drive or a faster, less efficient but lower capacity 7,200rpm drive?
Well, there is a third way that drives can help improve performance: by utilising a data cache. Desktop drives can get away with 32MB and 64MB caches, while the physically far smaller 2.5-inch drives top out at 16MB, or more commonly 8MB. That's a tiny fraction of the total capacity, but it can help smooth out performance in more complex write scenarios.
This also overlaps with the option of hybrid devices, such as the Seagate Momentus XT.
Read More ...
Tutorial: Add tabs to the OS X Finder and find files more easily
There have been many attempts over the years to expand on Finder, which some Mac users consider too basic or just too awkward to use when many windows are involved. Cocoatech's Path Finder is the most famous example, offering tabs, dual-pane file-browsing and additional list sorting options; however, Path Finder is essentially a standalone application, and so if you use it you'll find yourself switching between it and Finder proper.
TotalFinder isn't nearly as advanced as Path Finder, but it takes the most important components and brings them to Apple's native Finder minus any extraneous interfaces.
Tabs, TotalFinder's dual-pane mode and the system-wide Visor are explored in the walkthrough below, so here we'll cover some of TotalFinder's smaller features.
Most of these are easily accessible by going to Finder > Preferences, selecting TotalFinder and then clicking the Tweaks tab. Here, you'll see checkboxes and user-definable shortcuts for: showing otherwise invisible system files; Folders on Top, which places folders above files in list views; and Always Maximise, which makes a Finder window full-screen when you click the green 'zoom' button.
There's also a Freelance Windows checkbox. When checked, this retains Finder's default behaviour of opening a folder in a new window if it's Command-clicked; if you don't check this, TotalFinder opens Command-clicked folders in new tabs.
The tutorial starts by assuming you've installed TotalFinder and have launched the application. On doing this, Finder will restart, so TotalFinder can integrate itself into Apple's file browser. For more information about TotalFinder, including any possible future developments, visit http://totalfinder.binaryage.com.
Note that if you work through the tutorial and decide that TotalFinder's not for you, click TotalFinder's menu extra and select Restart Finder to revert to Finder or Uninstall TotalFinder to remove the application entirely. Of course, you could always read our review of the latest version of TotalFinder before you install it – simply turn to page 100.
How to master TotalFinder
1. Work with tabs

When TotalFinder is running, new Finder windows start life with a single tab, which replaces the title bar. The tabs are akin to those in Google Chrome, so click the + icon (or use Command+T) to get a new tab, click × to close a tab, and note that each tab displays the folder's icon.
2. Manage your tabs

Tabs are managed like tabs in modern web browsers. You can click-hold and drag left or right to reposition a tab within a window, or tear one free to create a new window once you let go of the mouse button. You can also drag tabs between Finder windows.
3. Drag content to tabs

TotalFinder makes it simple to move content between tabs. Click-hold a file and drag it over a tab. When the arrow appears, let go of the mouse button and the file will be moved. Alternatively, pause for a second to switch to the destination tab and navigate further into the folder.
4. Activate dual-pane view

Select View > Toggle Dual Mode (or press Command+U) to activate dual-pane mode. If you've only one tab open, it will be cloned; if you've more than one tab, your selected tab will join to the one to its right (if available) or to the one to its left. (Command+U also reverts.)
5. Work in dual-pane mode

When in dual mode, tabs 'link' in the tab bar and are managed as one. A dual-mode window is split, providing a Finder window instance (including sidebar) on each side. Set both sides to list view to get an efficient file management workspace akin to an FTP client.
6. Activate Visor

Go to Finder > Preferences and select the TotalFinder section. Click the Visor tab and check The Visor Feature. Visor is a system-wide Finder window that slides up from the bottom of the screen when you activate it using the shortcut (Command+' by default).
7. Work with Visor

Visor is handy for stashing regularly used folders, which can be accessed with a single keyboard shortcut from any app. When Visor is deactivated (using Command+' or Esc if you check the option in the Visor preferences), the app you were in is automatically refocused.
8. Pin Visor

At the top-left of Visor, you'll see a non-standard blue button. This is not a close button; instead, it pins Visor open when clicked (you can also use the shortcut Shift+Command+P), stopping Visor from automatically closing when you switch to another application.
Read More ...
Samsung Galaxy S II Mini outed by leaked slides
Not content with simply revealing potential release dates for the BlackBerry PlayBook and HTC Flyer, Three's leaked slides also out the Samsung Galaxy S II Mini. The 'mini' version of the Samsung Galaxy S II comes with a 3.7-inch screen – not exactly what you'd call small given that it's the same size as the HTC Desire S's display.
The processor isn't exactly tiny either, rocking 1.4GHz as it does, plus there's Android Gingerbread and front- and rear-facing cameras to play with too.
The latter comes in at 5MP with autofocus and an LED flash as standard.
Prequel not sequel
Three's slide lists the Samsung Galaxy S II Mini release date as April, which could see it putting in an appearance before its bigger brother, the Samsung Galaxy S II, which some retailers have pushed back to May.
Also getting a look in thanks to The Great Three Slide Leak of 2011 is the Nokia X7, of which we've seen neither hide nor hair since a sneaky video back in November 2010.
The X6's successor looks set to land in June running Symbian; which version hasn't been specified, but we'd guess Symbian^3.
Read More ...
BlackBerry PlayBook UK release date leaked
Some leaked Three-branded slides suggest that the BlackBerry PlayBook UK release date will be coming in June 2011. RIM, which revealed its first tablet efforts in September, is set to launch the PlayBook in April in the US.
An official UK release date has not been revealed, and Three's slide states that it's the Wi-Fi only model that will be available in June.
Three – it's a magic number
We can also assume that Three will be ranging the RIM tablet, although talks may not yet be finalised if the release date is so far in the future.
The leaked slides, which were sent to Engadget, also list the HTC Flyer for a May release and out the Samsung Galaxy S II Mini, a variant of the Galaxy S II, which is yet to hit the UK's shelves.
A June launch will put the PlayBook a good three months behind Apple's iPad 2, as well as taking on competition from the bevy of Android tablets set to launch between now and then – including the highly-rated Motorola Xoom and HTC Flyer.
Read More ...
WIN! 1 of 15 Antivirus products from G Data
TechRadar has teamed up with G Data to offer you the chance to win one of 15 antivirus software products. With dangerous new viruses appearing daily and the all-too-real possibility of one striking at any time, robust and responsive antivirus protection is a necessity to protect your precious digital data.
For over 25 years G Data's antivirus solutions have been keeping computers protected; now you have the opportunity to win one of G Data's award-winning products.
G Data Total Care 2011 offers award-winning virus protection, optimised user guidance and a 'silent' firewall for monitoring all online activities. In addition the software also offers efficient security tuning and the ability to backup data to a wide range of storage media.
Alongside improved award-winning virus protection and optimisation of the intuitive user guidance, G Data InternetSecurity 2011 also protects all online activities by means of a 'silent' firewall. Operating invisibly, without any loss of computing power or stress-inducing queries, it successfully blocks hackers, viruses and spam.
G Data AntiVirus 2011 offers antivirus, spyware and phishing protection without any loss of computing power. AntiVirus offers users a simple installation experience without compromising protection. Features such as increased scan speeds and reduced memory requirements make this a perfect solution.
All G Data products are updated regularly as new threats develop, always keeping your PC protected.
A one-year subscription to TotalCare 2011 is worth £39.95, while InternetSecurity 2011 is £29.95 and AntiVirus 2011 is £24.95.
The first five lucky winners will get copies of TotalCare 2011, the second five will get InternetSecurity 2011 and the third five will all win copies of AntiVirus 2011.
To win, answer the following question:
What is the name of G Data's flagship anti-virus product?
- a) TotalDefence 2011
- b) TotalProtection 2011
- c) TotalCare 2011
Read More ...
Alcatel launches Royal Wedding-themed phone
The wedding between Kate and Wills is not so much a celebration of the future of our monarchy and the joining in holy matrimony of two young people in love as an opportunity to peddle some 'interesting' memorabilia. Carphone Warehouse and Best Buy have unashamedly hopped aboard the bandwagon with a special edition Royal Wedding mobile phone.
In accordance with the quality and prestige as befits the marrying of the Windsor line with the Middleton clan, the handset itself is an Alcatel One Touch candybar handset.
It's really "nice"
Emblazoned on the front is a Union Jack, while the red back features a very, er, beautiful calligraphic rendering of the couple's initials and wedding date.
What's more, it comes with Mendelssohn's Wedding March as the ringtone, and the wallpaper is a photo of our balding future King and his bride.
Mark Eastham, Commercial Director for The Carphone Warehouse and Best Buy, is pretty excited about the phone, saying:
"We are delighted to offer this fun, limited edition, Royal Wedding – themed phone to our customers. It's great to offer a product which really taps into the national spirit over the coming months, and is still an affordable option for customers".
Great indeed. Yours for £15 on pay as you go, the handset will be available from Carphone Warehouse and Best Buy.
Well, that's TechRadar's wedding gift sorted anyway.
Read More ...
Explained: How your operating system works
When we use a PC, we're usually only concerned with the program we're currently using, whether it's a browser, a word processor or our preferred social networking app. We don't often think about the rather extensive program underlying everything that happens on the PC: its operating system. We can fire off the name of our chosen OS at the drop of a hat (Windows 7, Chromium or Ubuntu, for example), but could we say just what it is and what it does?
When we think of an operating system at all, it's usually to help define what our PC is - 2.66GHz Core 2 Duo, 4GB RAM, 256GB SSD, Windows 7, for example. It's almost like a physical peripheral; something we could swap for a similar substitute in the future.
And until now, the operating system defined a PC just like the hardware. We've grown up with large, all-encompassing operating systems that provided a wealth of services, but that might be on the cusp of changing.
Let's start by considering what a PC is. It's a collection of hardware: the motherboard, CPU, memory chips, video adaptor, hard disk (or SSD), optical drive, keyboard, mouse, screen and so on. Each of those pieces of hardware is, in essence, interchangeable with others that perform the same functionality, but perhaps faster, more efficiently or more compactly.
With a desktop machine, you can pretty much replace the whole PC a bit at a time by upgrading each component. The modular nature of PCs sometimes seems almost miraculous (it's certainly not the case for cars - I can't just decide to pop a Ford engine into my Audi, for example) and it's only possible because the operating system smooths over any differences between components and hides them from us, usually through the use of specially written drivers.
In the beginning
Let's approach what an operating system does by considering what happens when you boot your PC. Pressing the power switch starts up a small program that's stored in read-only memory - a chip on the PC's motherboard.
This program, known as the BIOS (basic input/output system) on PCs and EFI (extensible firmware interface) on Macs, runs a set of routines intended to identify peripherals and system devices, and initialise their firmware. This is known as the POST (power-on self test) and, as its name suggests, is needed to check the proper functioning of the hardware.
POST routines, if the option is set, can also perform intensive checks like testing all installed memory, but the level of checking is usually curtailed for the sake of speed.
Both the BIOS and EFI are written for the particular motherboards they're found on. They have knowledge of the chipsets that are embedded in the motherboard and require changes in order to run on other systems. They're encoded on EEPROMs soldered onto the motherboard so they can be updated.
The BIOS is archaic - the archetypal legacy system, first implemented in the days of DOS PCs. Back then, you could access the BIOS directly from your applications using a well-known standard API (applications programming interface), circumventing DOS completely. You could almost view it as part of the operating system.
These days, despite the increased capability of modern PCs compared to their DOS forebears, the BIOS is still a 16-bit program running in 1MB of memory (the start point of the program is at address 0xFFFF0, 16 bytes below the 1MB mark).
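Those numbers are easy to sanity-check. The sketch below (purely illustrative arithmetic, not code any firmware actually runs) confirms that 0xFFFF0 really is 16 bytes below the 1MB boundary, and shows the same address in the real-mode segment:offset form the CPU actually uses at reset:

```python
# The BIOS entry point sits 16 bytes below the 1MB (2^20 byte) boundary.
ONE_MEGABYTE = 1 << 20           # 1,048,576 bytes
entry_point = ONE_MEGABYTE - 16  # 0xFFFF0

# In real mode the same address is written segment:offset F000:FFF0;
# the physical address is segment * 16 + offset.
segment, offset = 0xF000, 0xFFF0
assert segment * 16 + offset == entry_point

print(hex(entry_point))  # 0xffff0
```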
EFI was designed as the successor to BIOS. It has better support for larger disks and the ability to boot from them, a much faster boot time (since it doesn't use backwards-compatible code to access chipsets and disks), 32-bit or 64-bit code, installable drivers and so on. Unfortunately for PC users, the BIOS maintains its stranglehold on our machines.
One of the final stages of the POST process is to identify and initialise the drives in the system. The BIOS has a table that defines the boot order for disks, and it starts looking through this table for a disk that contains an operating system.
To do this, it looks for the bootstrap loader on each disk in sequence. The BIOS doesn't know about filesystems (the filesystem is only loaded as part of the operating system). Instead, it only knows about sectors, heads and cylinders.
To find the bootstrap loader, the drive must be formatted in a peculiar way. In particular, the first 512-byte sector of the drive (found at cylinder 0, head 0, sector 1) has to be formatted as a master boot record (MBR).
Although small, the MBR has enough code to continue the loading of the installed operating system (each operating system has its own code), and also has a small table of partitions that the drive has been split into.
This table contains information about all the partitions on the drive. Since the MBR only contains enough room for the information on four partitions, that's what we're limited to for PC systems.
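The MBR layout just described can be decoded in a few lines. This is a sketch using the standard on-disk format (boot code in bytes 0-445, four 16-byte partition entries from byte 446, and the 0x55AA boot signature closing the sector); the function and field names are our own:

```python
import struct

def parse_mbr(sector: bytes):
    """Decode the partition table from a 512-byte master boot record."""
    if len(sector) != 512 or sector[510:512] != b"\x55\xaa":
        raise ValueError("not a valid MBR")
    partitions = []
    for i in range(4):
        # Each of the four 16-byte entries starts at byte 446.
        entry = sector[446 + i * 16 : 446 + (i + 1) * 16]
        status = entry[0]                                # 0x80 = bootable
        ptype = entry[4]                                 # partition type byte
        lba_start = struct.unpack_from("<I", entry, 8)[0]
        num_sectors = struct.unpack_from("<I", entry, 12)[0]
        if ptype != 0:  # type 0 marks an unused slot
            partitions.append({
                "bootable": status == 0x80,
                "type": ptype,
                "lba_start": lba_start,
                "sectors": num_sectors,
            })
    return partitions
```

Feeding this a real first sector (for example, the 512 bytes read from a disk image) would list each partition's type, start address and size, exactly the information the MBR code uses to find the active partition.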
On the run
Once the BIOS finds an MBR, it loads it into memory and starts executing it. The MBR code identifies the primary active partition by looking at the partition table, reads the partition's boot loader (known as the volume boot record) from the disk into memory (the partition table stores where the VBR can be found), and starts executing it.
This second boot loader is the one that will load yet more sectors into memory, have more knowledge of the filesystem and continue the loading of the operating system. It will load the actual operating system boot loader (known as 'ntldr' prior to Windows Vista and 'winload.exe' after that), and this boot loader will switch the CPU to 32-bit mode or, increasingly, to 64-bit mode.
At this point the BIOS is no longer used because Windows uses boot-level drivers in order to access the hardware.
For EFI, the process is largely similar, the difference being that EFI has knowledge of the filesystem and can load the boot loader directly. This is called BootX on a Mac. Once the operating system starts to boot, it initialises the various systems it moderates. These include the CPU, memory, on-board devices, persistence mechanisms, and app and user interfaces.

FIGURE 1: Major components in an operating system
For the CPU, the operating system not only switches it into 32-bit or 64-bit operation, but also virtualises the CPU for security and protection. The OS and its kernel drivers execute at the highest privilege at Ring 0, whereas normal user applications (such as the browser or a word processor) run at the lowest privilege in Ring 3 (there are four privilege or protection rings in modern Intel architecture, of which Windows uses only two).
The main reason for this is security; by ensuring that non-privileged programs can't corrupt the OS, either maliciously or accidentally, the system as a whole is made more stable.
By providing this level of protection, the OS can also abstract away certain physical limitations of the PC (say, the amount of memory) from user applications and then provide services like program swapping and multitasking, making the application easier to write and the overall system more efficient.
Time management
The operating system also manages how much CPU time each running application has to ensure that it gets work done. Most modern PCs have multiple CPU cores, but let's imagine we only have one.
Only one application or process can use that CPU at any one time. To create the illusion of many applications running simultaneously, the OS will switch rapidly back and forth between the current set of programs, giving each one a small timeslice in which to get some work done.
Once the current application has run for a certain amount of time (the time slice), or has been suspended waiting for a resource or some form of user input, the OS will save its current state (register values, memory, current execution point), load the saved state for the next application in line, and start executing it.
After it's completed its timeslice or has suspended, the next process gets a turn using the CPU to get work done. This round-robin scheduling continues as long as the OS is running.
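That round-robin behaviour can be caricatured in a few lines. This toy simulation (illustrative only; a real scheduler juggles priorities, I/O waits and saved register state) just cycles processes through a queue, each getting a fixed timeslice until its work is done:

```python
from collections import deque

def round_robin(jobs, timeslice):
    """Simulate round-robin scheduling.

    `jobs` maps a process name to the total CPU time it needs; each
    process runs for at most `timeslice` units before being moved to
    the back of the queue. Returns the sequence of (process, run) turns.
    """
    queue = deque(jobs.items())
    timeline = []
    while queue:
        name, remaining = queue.popleft()
        run = min(timeslice, remaining)
        timeline.append((name, run))
        if remaining - run > 0:
            queue.append((name, remaining - run))  # back of the line
    return timeline

# Two "programs" share one CPU in slices of 2 time units.
print(round_robin({"browser": 5, "editor": 3}, 2))
# [('browser', 2), ('editor', 2), ('browser', 2), ('editor', 1), ('browser', 1)]
```

Run fast enough, that interleaving is what makes both programs appear to run at once.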
Sometimes an interrupt will occur needing immediate attention, in which case the current process is interrupted, its state is saved and the interrupt is serviced.
Modern 32-bit and 64-bit operating systems provide services that manage memory for user applications, including the virtualisation of memory. In essence, the memory layout for a user application looks exactly the same to each one.
To ensure that different programs don't interfere with each other's memory structures, the OS doesn't provide access to physical memory directly. Instead, it maps user-mode memory through mapping tables called descriptor tables to either real memory or to the swap file on disk.
This provides a great deal of flexibility for the operating system: it can move memory blocks around to accommodate other programs' memory requirements without the original program knowing; it can swap out memory blocks to disk if the program isn't being used and it can defer assigning (or committing) memory requests until the memory is written to.
This mapping of memory through descriptor tables also means the OS gives every user application the convenient fiction that it's the only application running on the system. No applications will clash by trying to use the same memory, and an application can't cause another to crash by writing to its memory space.
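The descriptor-table idea can be sketched as a per-process lookup table (hypothetical names; real page tables are multi-level structures walked by the CPU's memory-management hardware):

```python
class VirtualMemory:
    """Toy model of per-process address translation.

    Each process gets its own table mapping virtual page numbers to
    either ('ram', frame) or ('swap', slot), so two processes can use
    the same virtual address without touching the same physical memory.
    """
    def __init__(self):
        self.table = {}

    def map_page(self, vpage, location):
        self.table[vpage] = location

    def resolve(self, vpage):
        if vpage not in self.table:
            raise MemoryError(f"page fault: page {vpage} not mapped")
        return self.table[vpage]

# Programs A and B both "see" page 0, but it is backed differently.
a, b = VirtualMemory(), VirtualMemory()
a.map_page(0, ("ram", 42))   # A's page 0 lives in physical frame 42
b.map_page(0, ("swap", 7))   # B's page 0 is currently swapped out to disk
```

An unmapped access raises a "page fault" here, just as the real hardware traps into the OS, which then either fetches the page from the swap file or kills the offending program.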

FIGURE 2: Mapping virtual memory to physical memory and the swap file with a descriptor table
Figure 2 shows two programs, A and B, both with the same view of their memory space. Program A has one block of memory allocated at a particular position in its memory space; in reality it's found somewhere else in physical memory via the descriptor table. Program B has two blocks allocated, one of which is found in the swap file for the system.
Files and folders
Another important virtualised service provided by the OS is the filesystem. Disk hardware works on logical block addresses (LBAs), which are essentially numbers that define the sector number from the beginning of the drive volume.
The disk controller hardware converts the LBA into physical parameters (such as head, track and platter) to find the actual sector. For SSD drives, the disk controller simply converts the LBA into a memory address (although the controllers on SSDs will move data blocks around to even out access to the flash memory unbeknownst to the OS).
The operating system hides these raw LBAs from user programs by imposing a hierarchical filesystem over the disk. The filesystem organises the physical disk sectors (each one usually being 512 bytes in size, although the market is starting to move towards 4kB sectors, since most filesystems already use that as the minimum allocation unit for a file) into files and directories.
The filesystem is responsible for maintaining the mapping between files and blocks, which blocks appear in which files (and in which order they appear), which directory a file is found in, and other similar services.
The filesystem virtualisation also means that user programs only need to worry about high-level operations with files and folders: creating new ones, deleting existing ones, adding data to the end of a file, reading and writing to files and enumerating the folder contents.
All the mapping between user-friendly names and LBAs is done by the operating system under the hood. To the user program, a file is just a contiguous set of bytes somewhere on the disk and it doesn't have to work out that the file consists of a block over here, followed by that one over there.
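A toy directory can make that file-to-block bookkeeping concrete (illustrative names only; real filesystems keep these mappings in on-disk structures such as inodes or NTFS's master file table):

```python
class ToyFilesystem:
    """Map friendly filenames to ordered lists of disk blocks (LBAs)."""
    def __init__(self, block_size=512):
        self.block_size = block_size
        self.files = {}      # name -> list of block numbers, in order
        self.blocks = {}     # block number -> raw bytes
        self.next_free = 0

    def write(self, name, data):
        """Split the data across blocks; they need not be contiguous."""
        self.files[name] = []
        for i in range(0, len(data), self.block_size):
            self.blocks[self.next_free] = data[i:i + self.block_size]
            self.files[name].append(self.next_free)
            self.next_free += 1

    def read(self, name):
        """Reassemble the blocks so callers see one contiguous byte string."""
        return b"".join(self.blocks[n] for n in self.files[name])
```

A caller who writes a file and reads it back never sees the block numbers at all, which is exactly the abstraction the real filesystem provides.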
APIs
This filesystem abstraction points to another set of services provided by the operating system: the application programming interfaces (also known as APIs).
These are plug-in points that let user programs like browsers and word processors take advantage of various services exposed by the operating system. These include APIs for memory management, file and folder management, network management, user input (keyboard and mouse), the windowing user interface, multimedia (video and audio) and so on.
In all cases, the API provides a standardised way for user applications to obtain and use resources from the PC, no matter what hardware was actually present. So, for example, a user program doesn't have to know anything about which video adaptor or screen the PC is using in order to display something on it. It merely makes calls to the standard API ('draw a window here of this size') and the adaptor and screen drivers translate those standard requests to calls to the hardware that provide the required result.
That is perhaps the last part of the operating system story. It isn't a monolithic program, written to work with every single piece of hardware out there. It is instead a framework into which hardware-specific drivers are plugged.
These drivers know how to access their particular hardware, can translate between standard function calls and the requirements of the device, and are written to use the operating system's APIs.
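That plug-in relationship can be sketched with a minimal interface (hypothetical names; real driver models, such as Windows' WDM, are far richer and live in kernel mode):

```python
from abc import ABC, abstractmethod

class VideoDriver(ABC):
    """The standard interface the OS defines; each vendor supplies its own
    implementation for its particular hardware."""
    @abstractmethod
    def draw_window(self, x, y, width, height): ...

class GenericVGADriver(VideoDriver):
    def draw_window(self, x, y, width, height):
        # Translate the standard call into hardware-specific operations.
        return f"VGA: window at ({x},{y}) size {width}x{height}"

def os_draw_window(driver: VideoDriver, x, y, w, h):
    # The OS (and the user programs above it) only ever see the
    # standard API, never the hardware behind it.
    return driver.draw_window(x, y, w, h)
```

Swapping in a different driver class changes the hardware behaviour without a single change to the calling code, which is the whole point of the framework design.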
Read More ...
Tutorial: How to clean your PC of dust and dirt
Tips for improving computer performance usually concentrate on streamlining and maintaining operating systems, boosting speed with new RAM, upgrading video and so on. However, you can give your machine a speed and reliability upgrade easily with the help of a vacuum cleaner and a soft brush. A build-up of dust on vents, components and fans ruins your machine's ability to keep its cool, and when a computer runs at a high temperature, it goes more slowly.
In the worst-case scenario, your cards, power supply units and motherboards can fail entirely. On a more basic level, dirt and dust can gum up moving parts and affect performance.
Here, we'll show you how to physically clean your PC, keyboard and monitor. As a bonus, we'll also tell you how to keep your computer grime-free once you've fettled it. You'll add years to the life of your hardware and improve its performance.
Before you begin, remember that PC cleaning is a serious job that - depending on how far you want to take it - will require some technical skills. As a gauge, if you're comfortable with fitting new memory or upgrading a video card in your PC, you should be able to complete all the steps.
Gear up, power down
Start by assembling your tools. You'll need a small, soft brush - the kind you might use for painting a window or door frame. Make-up brushes are also ideal. Go for the best quality you can afford, because economy ones often tend to shed hairs.
A can of compressed air, which should be available from most computer retailers and hardware shops, is also required. Make sure you have soft, general cleaning cloths for the exterior of your machine and the computer's cabling.

The final essential tool is a full-sized vacuum cleaner with a nozzle attachment, or a fully charged handheld device. Some other tools may be handy, but aren't necessities. For example, an anti-static wristband will prove useful once you've opened up the computer.
You might also want to use a switch cleaner, which is a spray solvent that eats dust and can be used on ports and contacts. These aids can be bought cheaply from Maplin or larger computer retailers.
Switch off your computer and unplug it from the mains. If you've been using it, you should leave it to stand for at least 30 minutes before you begin the cleaning routine. This will give internal components a chance to cool down, and also reduce the risk of electric shock from any stored charge that may potentially injure you or damage your computer.
Carefully unplug all your peripherals and input devices, then set the cables to one side, because you'll be giving them special attention.
Place your computer on a raised surface - an empty table or desk will do fine. Attempting to spring clean with the computer on the floor or in another awkward place will just make things more difficult. You're now ready to begin.

Start with the easy part - cleaning the computer's exterior. Using a vacuum cleaner hose or handheld vac, remove dust from vents and any visible USB, video and networking ports. Dust can get into infrequently used ports, increasing the risk of malfunction.
Be careful when working near fans, because causing them to spin in the wrong direction can damage their operation.
When the excess dust has been removed, carefully wipe down the exterior of the case. If there's any sticky grime on there you can use a very damp cloth or a little household surface cleaner to get rid of it. Take care not to go near any ports or vents with liquid.
It's now time to open up your machine and begin the serious bit of the exercise. With most modern computers, you should be able to remove the side panel using a catch, but on older machines you may have to undo a couple of screws first. Your aim is to get inside the case so you can see the damage caused by months of dirt.
Inside and out
You've now reached your first decision point. If you're happy with the technical aspects of computer maintenance, proceed with caution. If you're less confident, we suggest skipping over this bit and simply vacuuming the interior.

If you're feeling brave, put on your anti-static wristband. If you don't have one, touch something metal like a radiator to discharge any static that's built up before you begin.
To clean inside the case and around the motherboard as effectively as possible, it's best to remove any add-on cards. These can include ones for video and audio, networks and port extensions. You can also take out memory chips carefully, but only if you're happy about doing so.
Place the removed components on a clean and clear surface. If you're a completist who's keen to have a spotless PC, you can also remove any internal connector cables. It's best to leave power supply cables - the yellow, red and black leads feeding into drives and other components - in place. IDE ribbon cables, SATA cables and audio connectors can be unplugged and set aside.
Dust buster
You may be amazed at the amount of dust that can accumulate inside a PC case. It's not unusual to find spiders' webs alongside the balls of fluff and general detritus. With the case open and exposed, you can vacuum most of what's built up straight out using the hose from your cleaner.
Be careful when you get near fans, and avoid nudging or touching your PC's components with the nozzle.
When you've removed all the dust that's easy to vacuum out, it's time to turn to the brush and can of compressed air. Starting from the top of the case, use the brush to gently swish any dust off the motherboard and slots.

Compressed air can then be used to dislodge more stubborn grime, but make sure you only use it in very short bursts and follow up with a sweep of the brush, moving the dust out of the case. These short attacks are highly important, because anything longer can introduce moisture to your system, possibly causing a short circuit.
Now wipe down the bare areas of the case with a clean, dry cloth, being careful to avoid electrical parts. For those, always use the brush instead - a cloth risks leaving behind conductive fibres.
If there are any particularly stubborn areas of grime - more likely if you're a smoker - you have another choice. If the dirt is on the case interior, you can use a small amount of surgical spirit on a lint-free cloth to wipe at these spots carefully. If the ground-in dirt is on electrical components, though, you may do more harm than good trying to remove it. You'll just have to live with it.
Side project
As an advanced user, you may want to finish your spring clean with a bit of light repair, especially if you've noticed a noisy fan or two in your system recently. Case fans are usually closed systems, with internal lubrication that should last a lifetime. Occasionally, dirt can compromise that system, soaking up lubricating oil or enabling it to dry up. The result can be a noisier, hotter computer.

In this case, you can try a drop of sewing machine oil in the centre of the fan. You'll find the stuff on Amazon for about £3 a bottle. You'll need to remove the fan from the case before applying lubricant, but that's not a difficult job.
Carefully detach the power cable from the motherboard first, then remove a screw from each corner of the fan. Pull the fan free of the case. In the centre of the fan, there should be a sticker. Peel that back carefully and put it somewhere safe.
You should see a rubber or plastic plug underneath the sticker. Remove this and add one drop of oil to the spindle. That's all you should need to get things moving smoothly again. Replace the plug and sticker, then carefully reinstall the fan in your machine.
Reassembly
It's now time to return to the components you took out of the PC and left to one side. Clean them individually with the soft brush, wiping away any excess dust. Hold the parts by the edges, being careful not to touch any contacts.
If you have a blower brush - a tool commonly used in camera cleaning - it's excellent for this task. When the kit has been cleaned, you can reassemble the PC.

Make sure the cards and memory chips are properly seated first, and if you removed any cables earlier, wipe them clean with a dry cloth and reconnect them. Have one final check to make sure they're connected firmly and correctly.
Remove any cleaning materials or obstructions before closing up the case. Leave the system to sit for 30 minutes, just in case all that blowing and wiping introduced any moisture.
Finally, connect the keyboard, mouse, monitor and power cable, with no other peripherals attached. Then switch on the machine to test that it's still working. If all has gone well, you should be able to enjoy a quieter, cooler and cleaner PC.
Read More ...
Review: Rega Saturn and Mira 3
You'll notice that the Rega Saturn CD player is a good deal dearer than the Mira 3 amp and one of its more upmarket touches is an aluminium front panel, whereas the Mira amp has plastic. Despite that, they match very well visually and the control illuminations chime pleasantly together, too.
Again, control layout is somewhat out of the ordinary, with the amp apparently lacking an input selector; as with the Audio Analogue Crescendo, the volume knob also selects inputs. In this case, you press it to convert to selector mode, then rotate to select. You soon get used to pressing it a second time to go back to volume mode, which it otherwise does automatically after a few seconds.
Once again, it's possible to see where costs have been cut, in the amp at least, but it's nothing we'd feel inclined to complain about.
The mains transformer isn't the biggest ever, but it's more than good enough for the rated output and the main amplifying circuit is neatly executed with discrete transistors. There's a phono stage built-in and full record output and monitoring.
Output connectors are nickel-plated rather than gold, but that's probably more due to Rega's noted disdain for tweakery than cost-saving!

The Saturn employs a top-loading transport, which is very quick and pleasant to use and commits to audio purity via a pair of latest-generation Wolfson DAC chips and discrete-transistor output circuits.
Sound quality
Our listeners were quick to point to a slight lack of bass from this combination, but it doesn't seem to have interfered very seriously with their enjoyment of the sound. Indeed, one pointed out in the very same sentence that this was one of the most foot-tapping presentations, which certainly serves any strongly rhythmic music well.
At the same time, there is a good degree of clarity in the sound, with detail always present, but not unduly spotlit. Interestingly, two comments on the same track specifically mention the lack of any harshness or 'nasties' – this kind of double-negative is uncommon in our experience and taken together with the general tone of the comments, we feel it may be because there wasn't much to say beyond 'it sounds right'.
Intrigued by the bass character, we separated the units and listened to each with familiar references. It seems clear that the amp is responsible for this transgression and indeed the Saturn CD player is, if anything, quite strikingly full-voiced in the lower reaches.
The two are well-matched in terms of detail, though it's also worth noting that the phono stage in the Mira restores some of the neutrality that's missing via line inputs. For serious vinyl lovers, we would recommend something a little fancier.
Read More ...
Review: Exposure 2010S2
Exposure has always been the epitome of fuss-free hi-fi. The equipment is typically well put together but not flashy, adequate but not excessive on the features front and generally quite low key in a comfortingly confident sort of way. As with the Audio Analogue Crescendo, they are built to a tough price, but the savings necessary to do this have been achieved thoughtfully.
Take the case design, for instance: the complete case of both units is made of aluminium – believed by many to be preferable to steel because of its non-magnetic properties. It's more expensive than steel, but Exposure has taken advantage of it in other ways, notably in the amplifier, where its good thermal conduction is utilised by making the entire base the heatsink, a significant saving.
Sure, that won't allow full-power operation for long periods, but music doesn't work like that and we found no signs of distress in practice.
These may be the latest versions of the long-standing 2010 range, but electronic construction of both units is distinctly old-school, with through-hole components everywhere apart from the DAC chip and a few control parts; the amplifier even uses a single-sided circuit board.
Each unit has a decent-size mains transformer and the CD player uses multiple parallel power supply capacitors and several regulators for the various circuit stages. The disc transport is a dedicated audio one – in our review sample it was a bit noisy mechanically, producing a rather louder hiss than we'd ideally care to have around the listening room.
The CD player's digital output is on a BNC, theoretically better than a phono though possibly a wasted effort given how often people end up with a BNC-phono adaptor in circuit.

Over at the amp, features include a preamp output and the option of converting one line input to phono with an inexpensive add-on circuit board.
Sound quality
The fuss-free approach certainly seemed to do it for our listening panel, who were, of course, unaware of which combination they were listening to at the time.
From the outset, the 2010S2 units drew almost unqualified praise for their performance, covering both technical aspects and general musical qualities. Above all, the listeners agreed that this combo really got to the heart of the matter and simply played music that one really wants to listen to.
They are energetic and full of sparkle and life, with good attack and body to the sound too, and the results are highly convincing across the full range of musical styles.
Read More ...
Best Linux distros for netbooks
There are now some fantastic varieties of Linux that make ideal replacements for the operating system that's currently installed on your netbook. The latest version of Ubuntu UNR, for example, features brilliant hardware support alongside the expanding Unity interface, which Canonical is pinning its hopes on as a Gnome replacement for the next mainstream Ubuntu release.
Moblin and Maemo have also combined to create a new netbook operating system, known as MeeGo, and version 1.1 is a great choice if you particularly enjoy social networking through a streamlined interface.
Then there's Jolicloud to consider - a connected Linux-based operating system that blends local applications and storage seamlessly with those offered by the cloud. It's had some fantastic reviews and has been updated very recently.
Any one of these would make a great replacement for an older netbook distribution, such as Xandros or Linpus, and can even make better sense than a Windows installation if you can do without the compatibility offered by Microsoft's OS.
New Linux netbook distributions have the advantages of active community support and development, but the best thing about this list of distros is that they can all be installed on your machine at the same time. The only trick here is knowing how to do it.
1. Install to USB stick

Most netbook Linux distributions use a custom utility that turns a downloaded ISO of the distribution into a USB-bootable installation. This is because most netbooks don't have an optical drive and will default to booting from a USB device if one is detected.
You'll need to go through this process for every distribution you want to install, but start with UNR.
Like Ubuntu, it first boots to a live desktop mode that you can then use to prepare your netbook's hard drive for as many distributions as you want to install on it. Before you get to that point, though, get hold of the UNR ISO and place the file on your desktop.
From an Ubuntu desktop, open the Administration menu and click 'Startup Disk Creator'. In the utility that appears, select the 'Other' button and use the file requester to find the UNR ISO.
Back in the main window, insert your USB stick and make sure it's detected in the 'Disk to use' list. You should also erase whatever files you may happen to have on the device before clicking the 'Make startup disk' button.
A few minutes later, you should see the 'Installation complete' message. It's now time to move the USB stick to your netbook.
2. Partitioning

Your netbook should automatically boot from your USB stick if you start the system with the stick installed. Booting UNR should take a few moments more, and when the main desktop appears, you can choose between 'Try Ubuntu netbook' and 'Install Ubuntu netbook'.
Because we want to use a graphical tool to repartition the internal drive first, you need to choose the first option. This will drop you onto a proper live desktop without stepping through the installation.
The tool we now need to open is the partition manager, which can be accessed by clicking the 'Applications' icon in the left border toolbar, followed by 'GParted' in the list. This should be familiar if you've ever done some partition tinkering.
Each partition is visualised within a block representing your drive, and you can click and drag on this to delete or resize your current configuration, or create a new one. You'll get the best possible results by removing all existing partitions and creating a new one for each operating system you want to install.
Make sure you select 'ext4' as the filesystem for each, and that you add a 2GB swap partition to the end of the drive. You'll obviously lose all data currently on the drive, so make sure anything you need is backed up elsewhere first.
If you've already got Windows on the machine, it's also possible to resize its partition to make space for new ones, but you should still back up your Windows data. When you've finished making changes, click the 'Apply' icon.
3. Installing UNR

With the partitions created, click the 'Install Ubuntu Netbook' button at the top of the left panel toolbar. This will launch the same installer you might have seen when installing the desktop version of Ubuntu. The new version will even install updates in the background while you answer the simple questions asked by the installer.
The only way in which it differs from a default installation is that you need to make sure you select 'Specify partitions manually' from the second step. This ensures UNR will use the partition you've already created for it, and that it doesn't try to create its own new partition table.
From the 'Allocate drive space' window that appears, select the partition you want UNR to use, click 'Change', select 'ext4' from the Use As menu and give it a mount point of '/', which means the root partition. Now click 'Install now' to enable the installer to continue with its mission.
When it comes to installing any other distribution you want alongside UNR, you'll need to make sure you use its equivalent to the manual partition mode so that you can choose a new partition and apply a mount point for the new distribution. If you don't, there's a good chance your distribution will try to use your entire hard drive.

However, the boot menu for each distribution should be modified automatically. When the UNR installer has finished, you should find that you can restart your machine, select 'Ubuntu' from the boot menu and use your new Linux desktop. You should now attempt to set up a second distribution.
4. Installing Jolicloud 1.1

The Jolicloud ISO needs to be installed into your USB stick using another utility specific to the distribution. There are versions for Linux, Windows and OS X, and these can be found at http://help.jolicloud.com.
The Linux USB creator is a script; after you've downloaded it, you'll need to open a command-line terminal and cd to the directory where the script is located. Then type chmod +x followed by the script name to make it executable.
You need to make sure you've got the 'python-qt4' package installed, since this is used by the script to provide a GUI. Then you can run the script by typing ./scriptname within the directory containing it. This will open the application.
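Those two steps can be rehearsed safely with a stand-in script, since the real creator's filename varies by version - the name below is made up purely for the demonstration:

```shell
# Create a stand-in for the downloaded creator script; the real one
# opens a python-qt4 GUI, so a trivial script is used to show the sequence
printf '#!/bin/sh\necho "USB creator would start here"\n' > creator.sh

chmod +x creator.sh   # make the script executable
./creator.sh          # run it from the directory that contains it
```

With the real script, that final command is what opens the USB creator window.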
Use the 'Browse' button to navigate to your Jolicloud ISO, found in your home directory, and make sure your USB stick is correctly identified. Ensure that the stick is unmounted (ejected) from the desktop, then click 'Create' to start the process.
With the ISO safely on the USB stick, you can now restart your netbook with the USB device inserted. You should see the Jolicloud boot menu; from there, select the 'Install' option. Jolicloud is based on an older version of Ubuntu - 10.04 - and as a result, you should find the installer familiar.
You'll need to choose the manual partitioning option and select another unused partition from those you created earlier, in exactly the same way you did for the UNR installation. When the process is complete, reboot your machine.
Jolicloud should now be available as another option from the boot menu.
5. Installing MeeGo

MeeGo is the product of a collaboration between Nokia and Intel, and the netbook version is closely related to the Moblin operating system. As a result, its install process is slightly different to the usual Ubuntu way of doing things.
You'll still need to boot the system from a live USB stick, but getting the installer onto that stick in the first place involves a little command line trickery. This is because MeeGo is supplied as an IMG file rather than an ISO file, so it just needs to be written, byte-for-byte, to the USB stick.
The command for doing this is dd, but there are a couple of important considerations to bear in mind.
Firstly, you'll need to know the device name of your USB stick. If you get this wrong and select something else by mistake, data will be lost. Secondly, your USB stick will also be wiped and repartitioned as part of the process. To restore it to its original capacity for future use, you'll need to use the same partition tool we used on the hard drive.
After you've grabbed the IMG file, switch to the command line and insert your USB stick. Next, type dmesg. This will print the system's log to the screen, and you'll need to look for the last few lines of output. This is where the system will have detected the USB stick you've just inserted and reported which device node it's connected to.

The output should look something like 'sdb: sdb1'. You now need to unmount this device using the umount command, before executing dd if=meego.img of=/dev/your_device bs=1M.
If you've hit the correct device, you should see the access LED on your stick flashing. This process may take as long as 50 minutes, depending on the speed of your USB port. After it's completed, you'll be able to remove the USB stick and move it to your netbook for booting.
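Because a slip of the device name with dd is unrecoverable, it's worth rehearsing the command on ordinary files first. In this sketch, stick.img stands in for the real device node (such as /dev/sdb, as reported by dmesg); the dd invocation is otherwise exactly what you'd run:

```shell
# Rehearsal run: dd copies its input to its output byte-for-byte,
# just as it would write the real image to a USB stick
printf 'stand-in for the MeeGo image' > meego.img

# For a real stick, of= would be the device node reported by dmesg
dd if=meego.img of=stick.img bs=1M 2>/dev/null

cmp -s meego.img stick.img && echo "copy is identical"
```

Once you're comfortable with the shape of the command, substitute the real image and device name - and triple-check the of= target before pressing Enter.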
Choose the 'Install only' option when you reach the USB boot menu. The partition settings are configured on the third step of the installer, and you need to select 'Create custom layout' from the dropdown menu to be able to specify the partitions you need manually.
On the following page, select the partition you want and click 'Edit'. Now choose 'ext3' for the 'Format as' option, and '/' as the mount point. Click 'OK'.
You'll also need to select the swap partition and format this as 'swap', but that shouldn't affect your other distributions, since they can all share the same swap partition.
When you press the 'Forward' arrow to progress, you'll be warned about choosing 'ext3' over 'btrfs', but you can safely ignore this message.
The next step involves the boot loader installer, which will replace the one we installed with UNR. As a result, you need to add entries for both UNR and Jolicloud, if they're both installed. Click 'Add', choose the partition that's hosting the operating system and give the menu option a label.
Clicking the 'Forward' arrow will now install MeeGo. Twenty minutes later you should have a dual- or triple-booting netbook with the very latest operating systems Linux can offer.
Read More ...
Best Linux distros for netbooks
There are now some fantastic varieties of Linux that make ideal replacements for the operating system that's currently installed on your netbook. The latest version of Ubuntu UNR, for example, features brilliant hardware support alongside the expanding Unity interface, which Canonical is pinning its hopes on as a Gnome replacement for the next mainstream Ubuntu release.
Moblin and Maemo have also combined to create a new netbook operating system, known as MeeGo, and version 1.1 is a great choice if you particularly enjoy social networking through a streamlined interface.
Then there's Jolicloud to consider - a connected Linux-based operating system that blends local applications and storage seamlessly with those offered by the cloud. It's had some fantastic reviews and has been updated very recently.
Any one of these would make a great replacement for an older netbook distribution, such as Xandros or Linpus, and can even make better sense than a Windows installation if you can do without the compatibility offered by Microsoft's OS.
New Linux netbook distributions have the advantages of active community support and development, but the best thing about this list of distros is that they can all be installed on your machine at the same time. The only trick here is knowing how to do it.
1. Prepare the USB stick

Most netbook Linux distributions use a custom utility that turns a downloaded ISO of the distribution into a USB-bootable installation. This is because most netbooks don't have an optical drive and will default to booting from a USB device if one is detected.
You'll need to go through this process for every distribution you want to install, but start with UNR.
Like Ubuntu, it first boots to a live desktop mode that you can then use to prepare your netbook's hard drive for as many distributions as you want to install on it. Before you get to that point, though, get hold of the UNR ISO and place the file on your desktop.
From an Ubuntu desktop, open the Administration menu and click 'Startup Disk Creator'. In the utility that appears, select the 'Other' button and use the file requester to find the UNR ISO.
Back in the main window, insert your USB stick and make sure it's detected in the 'Disk to use' list. You should also erase whatever files you may happen to have on the device before clicking the 'Make startup disk' button.
A few minutes later, you should see the 'Installation complete' message. It's now time to move the USB stick to your netbook.
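Before moving the stick over, it's worth checking that the ISO downloaded intact. A minimal sketch, using an assumed filename and a stand-in file in place of the real image:

```shell
# Stand-in for the real download -- in practice $ISO is the UNR image
# you fetched, and the reference checksum is published on the download page.
ISO=ubuntu-netbook.iso
printf 'stand-in ISO contents' > "$ISO"

# Compute the local MD5 sum; if it doesn't match the published one,
# the download is corrupt and should be fetched again before you
# write it to the stick.
md5sum "$ISO"
```

md5sum prints the checksum followed by the filename; Ubuntu's download pages list the expected sums alongside each release.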
2. Partitioning

Your netbook should automatically boot from your USB stick if you start the system with the stick installed. Booting UNR should take a few moments more, and when the main desktop appears, you can choose between 'Try Ubuntu netbook' and 'Install Ubuntu netbook'.
Because we want to use a graphical tool to repartition the internal drive first, you need to choose the first option. This will drop you onto a proper live desktop without stepping through the installation.
The tool we now need to open is the partition manager, which can be accessed by clicking the 'Applications' icon in the left border toolbar, followed by 'GParted' in the list. This should be familiar if you've ever done some partition tinkering.
Each partition is visualised within a block representing your drive, and you can click and drag on this to delete or resize your current configuration, or create a new one. You'll get the best possible results by removing all existing partitions and creating a new one for each operating system you want to install.
Make sure you select 'ext4' as the filesystem for each, and that you add a 2GB swap partition to the end of the drive. You'll obviously lose all data currently on the drive, so make sure none of it is left behind.
If you've already got Windows on the machine, it's also possible to resize its partition to make space for new ones, but you should still back up your Windows data. When you've finished making changes, click the 'Apply' icon.
3. Installing UNR

With the partitions created, click the 'Install Ubuntu Netbook' button at the top of the left panel toolbar. This will launch the same installer you might have seen when installing the desktop version of Ubuntu. The new version will even install updates in the background while you answer the simple questions asked by the installer.
The only way in which it differs from a default installation is that you need to make sure you select 'Specify partitions manually' from the second step. This ensures UNR will use the partition you've already created for it, and that it doesn't try to create its own new partition table.
From the 'Allocate drive space' window that appears, select the partition you want UNR to use, click 'Change', select 'ext4' from the Use As menu and give it a mount point of '/', which means the root partition. Now click 'Install now' to enable the installer to continue with its mission.
When it comes to installing any other distribution you want alongside UNR, you'll need to make sure you use its equivalent to the manual partition mode so that you can choose a new partition and apply a mount point for the new distribution. If you don't, there's a good chance your distribution will try to use your entire hard drive.

However, the boot menu for each distribution should be modified automatically. When the UNR installer has finished, you should find that you can restart your machine, select 'Ubuntu' from the boot menu and use your new Linux desktop. You should now attempt to set up a second distribution.
4. Installing Jolicloud 1.1

The Jolicloud ISO needs to be written to your USB stick using another utility specific to the distribution. There are versions for Linux, Windows and OS X, and these can be found at http://help.jolicloud.com.
The Linux USB creator is a script; after you've downloaded it, open a command line terminal and cd to the directory where the script is located. Then type chmod +x followed by the script name to make it executable.
You need to make sure you've got the 'python-qt4' package installed, since this is used by the script to provide a GUI. Then you can run the script by typing ./scriptname within the directory containing it. This will open the application.
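The steps above amount to just two commands. The script name here is an assumption (downloads vary), and a stand-in script is created so the snippet is self-contained:

```shell
# Stand-in for the downloaded creator script; the real one opens a
# python-qt4 GUI, and its exact filename may differ.
SCRIPT=jolicloud-usb-creator.sh
printf '#!/bin/sh\necho "USB creator would start here"\n' > "$SCRIPT"

# Make the script executable, then run it from its own directory.
chmod +x "$SCRIPT"
./"$SCRIPT"
```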
Use the 'Browse' button to navigate to your Jolicloud ISO, found in your home directory, and make sure your USB stick is correctly identified. Ensure the stick is unmounted (ejected) from the desktop, then click 'Create' to start the process.
With the ISO safely on the USB stick, you can now restart your netbook with the USB device inserted. You should see the Jolicloud boot menu; from there, select the 'Install' option. Jolicloud is based on an older version of Ubuntu - 10.04 - and as a result, you should find the installer familiar.
You'll need to choose the manual partitioning option and select another unused partition from those you created earlier, in exactly the same way you did for the UNR installation. When the process is complete, reboot your machine.
Jolicloud should now be available as another option from the boot menu.
5. Installing MeeGo

MeeGo is the product of a collaboration between Nokia and Intel, and the netbook version is closely related to the Moblin operating system. As a result, its install process is slightly different to the usual Ubuntu way of doing things.
You'll still need to boot the system from a live USB stick, but getting the installer onto that stick in the first place involves a little command line trickery. This is because MeeGo is supplied as an IMG file rather than an ISO file, so it just needs to be written, byte-for-byte, to the USB stick.
The command for doing this is dd, but there are a couple of important considerations to bear in mind.
Firstly, you'll need to know the device name of your USB stick. If you get this wrong and select something else by mistake, data will be lost. Secondly, your USB stick will also be wiped and repartitioned as part of the process. To restore it to its original capacity for future use, you'll need to use the same partition tool we used on the hard drive.
After you've grabbed the IMG file, switch to the command line and insert your USB stick. Next, type dmesg. This will print the system's log to the screen, and you'll need to look for the last few lines of output. This is where the system will have detected the USB stick you've just inserted and reported which device node it's connected to.

The output should look something like 'sdb: sdb1'. You now need to unmount this device using the umount command, before executing dd if=meego.img of=/dev/your_device bs=1M.
If you've hit the correct device, you should see the access LED on your stick flashing. This process may take as long as 50 minutes, depending on the speed of your USB port. After it's completed, you'll be able to remove the USB stick and switch it to your netbook for booting.
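Put together, the sequence looks like the sketch below. Device and file names are assumptions, and because dd to a real device node is destructive, the copy here is demonstrated between two files instead:

```shell
# Stand-in image file; in practice this is the MeeGo IMG you downloaded.
printf 'pretend MeeGo image' > meego.img

# dd copies its input byte-for-byte to its output. For the real stick
# you would first check 'dmesg | tail' for the device name, run
# 'sudo umount /dev/sdb1' (name assumed), then use of=/dev/sdb --
# which overwrites everything on that device.
dd if=meego.img of=stick.img bs=1M

# Verify the copy is identical, as the stick's contents would need to be.
cmp meego.img stick.img && echo "copy is identical"
```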
Choose the 'Install only' option when you reach the USB boot menu. The partition settings are configured on the third step of the installer, and you need to select 'Create custom layout' from the dropdown menu to be able to specify the partitions you need manually.
On the following page, select the partition you want and click 'Edit'. Now choose 'ext3' for the 'Format as' option, and '/' as the mount point. Click 'OK'.
You'll also need to select the swap partition and format this as 'swap', but this shouldn't affect your other distributions.
When you press the 'Forward' arrow to progress, you'll be warned about choosing 'ext3' over 'btrfs', but you can safely ignore this message.
The next step involves the boot loader installer, which will replace the one we installed with UNR. As a result, you need to add entries for both UNR and Jolicloud, if they're both installed. Click 'Add', choose the partition that's hosting the operating system and give the menu option a label.
Clicking the 'Forward' arrow will now install MeeGo. Twenty minutes later you should have a dual- or triple-booting netbook with the very latest operating systems Linux can offer.
Read More ...
In Depth: Where next for speech recognition on the Mac?
Ten or so years ago we imagined the future would be all about holograms, virtual reality and voice control, but now, in 2011, we've not quite reached those lofty expectations. While 3D TV is slowly filtering into the mass market and augmented reality has begun to replace the chunky headsets seen on 90s gameshows, voice control really hasn't made the mark we were expecting it to. So what is it about voice recognition that has left more of us typing than talking?
Voice recognition in a nutshell
In order to fully understand the ins and outs of voice recognition we need to look at its main uses, of which there are three distinct categories. The first is voice control; simple spoken commands that can do anything from check for new mail to switch between applications.
Voice control within Mac OS X is an assistive technology but can be used as a quick way to handle common tasks. The same technology is used for Voice Control in iOS to switch tracks, as well as by in-car stereos to control playback, phone calls and SatNav.
Dictate the proceedings
Then there's dictation, which requires more impressive speech-recognition work. This is handled by apps from Nuance such as Dragon Dictate, which uses algorithms to learn your voice and understand what you say.
For these more advanced applications you will need a decent-quality microphone or headset and a profile will need to be created so that your unique voice patterns can be understood accurately. This also applies to apps such as Scribe from Mac Speech, which learns your voice from audio files and can transcribe audio notes you have made into text documents.
The final category has seen an increase in awareness and functionality with the rise of the iPhone and Android handsets. Apple recently acquired a company called Siri that specialises in voice search and Google already has voice search included as part of its Google apps.
Voice search, while not as technologically advanced as the dictation apps, picks out keywords from your requests and actions them based on its understanding, for example, searching for nearby restaurants. This category slightly overlaps with voice control, but with advances made by Google especially, it deserves its own category for its location-aware nature.
You may not know it, but your Mac actually has speech recognition technology installed by default. Try it out for yourself.

Head to System Preferences and click on the Speech button. From here, you can not only name your Mac in order to give it commands ("Computer, check my email" and so on) but you can also tell it to be constantly listening for your commands, so if you do need to switch apps and don't have a hand free, you can just say it out loud.
Amongst the many spoken commands a Mac will understand, you can even ask it to tell you a knock-knock joke. Just say "Tell me a joke", and your Mac will respond "Knock-knock", to which you must reply "Who's there?", and so on.
For more advanced tricks, head to the Command tab under Speech in the System Preferences pane and click the Open Speakable Items Folder. Here you will find scripts for individual actions and specific applications that you can edit and rename to suit you.
To create your own shortcuts, you can simply change the name of a script that already exists or duplicate a script and edit the contents using AppleScript. If you want to change what you need to say in order to invoke a shortcut, simply change the file name of the speakable item to anything you wish to use instead.
Applications that aren't already featured in the speakable items folder can be added, as well as shortcuts and voice commands included or made from scratch.
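Since speakable items are just files whose names are the phrases you speak, duplicating and renaming them works from the command line too. A sketch, simulated here in a scratch folder; the real location (on Mac OS X of this era, ~/Library/Speech/Speakable Items) and the script contents are assumptions, so use the 'Open Speakable Items Folder' button to confirm yours:

```shell
# Scratch folder standing in for the Speakable Items folder.
ITEMS=speakable-items-demo
mkdir -p "$ITEMS"

# A stand-in speakable item; the file name is the phrase you speak.
printf 'tell application "Mail" to check for new mail' > "$ITEMS/Get my Mail"

# Duplicate it under the phrase you'd rather say instead.
cp "$ITEMS/Get my Mail" "$ITEMS/Check for messages"
ls "$ITEMS"
```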
When used correctly, this speech recognition is a handy tool, but it's far too easy for it to misinterpret a command, or to mistake a conversation (or you talking to yourself) for one. The outcome is a lot of repetition and accidental actions.
There is an option available to turn a key on your keyboard into a kind of "push to talk" button, but using a finger to allow voice input kind of defeats the object of handsfree voice control. With a little tweaking and care, it's easy to control a number of applications and basic functions on your Mac without having to touch the mouse or keyboard, but it's certainly not perfect.
Talk to your phone
The iPhone and certain models of iPod also make use of speech recognition to change tracks, make calls and create playlists. By invoking Voice Control on the iPhone, a number of voice commands are available, much like speech recognition in Mac OS X.
Also like the Mac speech recognition software, voice control on the iPhone provides feedback to help ensure you select the correct command. As with desktop voice recognition, the iPhone's voice control can also be hit and miss, and you run a likely risk of calling the wrong person at the wrong time or playing obscure tracks from your iTunes library by accident.
With the new second microphone in the iPhone 4, audio clarity has been dramatically improved, leading to fewer mistakes. However, it's still possible to make errors, especially when the headphones are plugged in.
Mac speech recognition software
For this section of the article, we thought it was only fair, while extolling the virtues of voice control and dictation, to attempt to write it using only our voice.

Making use of Dragon's Dictate software, we are currently sitting in front of an iMac, looking pretty strange, speaking aloud as if to a secretary. When it comes to avoiding accidents, the speech recognition in Dragon's software is far more accurate, as it performs a series of tests and procedures that learn your voice and build a profile for specific uses. So, even if you have a particularly unusual voice, your dictation is surprisingly error-free.
The other benefit speech recognition offers is pace. While commands spoken to your Mac may take a few seconds to execute as the computer attempts to understand what you've said, Dictate can handle large sentences at a time.
The software provides a floating window that hovers over your currently running application and enables you to perform basic dictation as well as related tasks, such as saving files, sending email and more.
With word processing, the difficulty arises in distinguishing between the words you want dictated and commands such as punctuation, so you have to be very careful when adding commas and full stops. As if to illustrate the point, that last sentence took a little longer than normal due to the app thinking we wanted a comma followed by the word "is" rather than the word "commas".
One of the most important things you will learn when using software such as Dictate is that you need to speak clearly but naturally, as if you were speaking to another human being. Tiny intonations in your voice and the raising and lowering of pitch give clues to the software as to what you're trying to say, especially when using words with more than one meaning.
Dictate can work with your Mac's built-in microphone or another microphone you may be using, however it's best to use the recommended hardware such as the Plantronics headset we were provided with.
Headsets with a push-to-talk or mute button are the most useful as they avoid accidental inputs if you happen to clear your throat or begin a conversation with a friend or co-worker.
The application is constantly working to learn your voice in order to provide a flawless experience, and you can return to the practice tests at any point to give it a clearer idea of the way you speak. You can also create profiles for different locations where there may be background noise, such as in an office or a coffee shop (although how many of us would want to be speaking out loud to a computer in a public place?).
Despite the comma issue (which Dictate seems to think should be "congress you") it's very easy to ramble on for hours and hours, while the application hastily notes down everything you say.
Nuance also provides a piece of software called Scribe, which does largely the same job as Dictate, except it works with audio files you have recorded previously using an iPhone or another recording device.

Again, this software has to learn your voice before it can accurately transcribe your audio file and can only do so when it has a profile created. Once complete, it's a simple process of importing your audio note, checking for errors and receiving the transcribed text.
The same applies to the Dragon Dictation app available for iPhone and iPod, which does a pretty good job of recognising your voice in real-time and saving it as text.
While Dragon is the best way we have found to control applications and accurately dictate, it doesn't provide the totally hands-free experience one might expect. While it's a great deal easier to walk around the room calmly speaking your thoughts while the computer does the work, there has to be a level of editing and adjustment before you save your final copy.
Once again, as if to illustrate the point, we just changed 'savior' to the correct 'save your' in the last line. We dictated more than 50% of this article, amounting to 1000 words or so, and found we only had to weed out a few common mistakes such as similar-sounding words, grammatical errors and missing capitalisation, but it was light work in comparison with many options we've tried before.
Get bossy
It seems that speech recognition isn't quite at the level one would expect at this stage in its development. The software understands what we are saying and can accurately transcribe those words, and it can perform basic commands based on voice input, but it's perhaps the software performing the actions, rather than the engine transcribing the text, that needs further development.
Rather than simply telling a computer to check for mail, which you could do in the same amount of time with a mouse click, why can it not answer more complex questions such as "Do I have any important email?"
There would be more use in a method of using simple scripts along the lines of Google's priority inbox, which understands that when you say "important" you mean a specific set of contacts who may have emailed you.

The same is true of apps such as iCal, where scheduling meetings or events currently isn't as simple as one might think. What if you were able to say to your computer: "Set lunch with Dave tomorrow at two", and it understood your command, set the calendar date, emailed Dave and even went ahead and reserved a table at your favourite restaurant using an online booking form?
The technology exists, it's just about how it's applied. And here is where the crossover between desktop and mobile voice recognition is making the biggest difference.
Voice Search
Google search facilities get better and better with each update and now, via the iPhone and Android handsets, it can provide search results based on a spoken question, taking into account your location and preferences. This is as close to true voice control as we have ever been.
Siri performed a similar job on the iPhone then mysteriously disappeared from the App Store before the announcement was made that Apple had bought the company. Following its public spats with the search-engine giant, Apple is unlikely to continue using Google's search, maps and voice recognition tools, but sees the major benefits voice recognition offers mobile phone users, hence this acquisition.
Perhaps smartphones and the explosion of powerful GPS-enabled devices is exactly what the speech-recognition industry needs - an injection of awareness to bring it into the mass-market. As the world becomes increasingly mobile with iPads and iPhones taking on more of the daily burden traditionally consumed by laptops and netbooks, speech recognition is a much-needed tool, the popularity of which is likely to increase.
It won't be long before a synced phone mounted in a vehicle will respond to voice controls as standard and companies such as Ford, with its Voice Activated Sync, are leading the way. This control of devices through voice is not only convenient, but a serious safety measure to counteract the dangers of using a phone while driving.
Fancy talk or careless whispers?
With the many benefits of speech recognition, it seems strange that it hasn't quite taken off in the way some would have expected. But it appears that things are now beginning to change.
On the desktop, it seems that voice recognition is likely to remain limited to dictation apps; the mobile platform is where the more exciting voice-recognition apps are beginning to emerge.
Controlling your computer with your voice isn't quite as natural as some might think, and without 100 per cent accuracy it leads to too many time-consuming errors. The fact is that you always need to use a keyboard, even if you can do the majority of tasks with just your voice, and therefore voice recognition will never truly rule as an input method.
As smartphones become more powerful and more like computers, they become the ideal tools for voice-recognition software. And when combined with a search engine such as Google's Voice Search, keyboards could almost become a thing of the past.
If it weren't for games, perhaps a manufacturer would have already attempted a completely voice-controlled device?
In a way, Apple already has, with its almost buttonless iPod shuffle. The latest shuffle still offers voice control, but buttons were reintroduced after a lack of interest from consumers in a solely voice-controlled product.
"People clearly missed the buttons," said Steve Jobs at the time. Perhaps none of us want to be limited in control options; perhaps we're a little too shy to tell our electronic devices what to do in public.
We certainly felt a little silly during the writing of this feature as we babbled away into a microphone while others looked on quizzically. Ultimately, it comes down to adoption and a sense of 'normality' from technology.
Remember, handsfree calling was once a niche feature but is now widely accepted, even if users do appear to be talking to themselves. For voice control and speech recognition, the same is true. If telling a device what to do with your voice becomes the standard, more and more people will start giving their fingers a rest.
Read More ...
Siri performed a similar job on the iPhone then mysteriously disappeared from the App Store before the announcement was made that Apple had bought the company. Following its public spats with the search-engine giant, Apple is unlikely to continue using Google's search, maps and voice recognition tools, but sees the major benefits voice recognition offers mobile phone users, hence this acquisition.
Perhaps smartphones and the explosion of powerful GPS-enabled devices is exactly what the speech-recognition industry needs - an injection of awareness to bring it into the mass-market. As the world becomes increasingly mobile with iPads and iPhones taking on more of the daily burden traditionally consumed by laptops and netbooks, speech recognition is a much-needed tool, the popularity of which is likely to increase.
It won't be long before a synced phone mounted in a vehicle will respond to voice controls as standard and companies such as Ford, with its Voice Activated Sync, are leading the way. This control of devices through voice is not only convenient, but a serious safety measure to counteract the dangers of using a phone while driving.
Fancy talk or careless whispers?
With the many benefits of speech recognition, it seems strange that it hasn't quite taken off in the way some would have expected. But it appears that things are now beginning to change.
On the desktop, it seems that voice recognition is likely to remain limited to just dictation apps, however the mobile platform is where more exciting voice-recognition apps are beginning to emerge.
To control your computer with your voice isn't quite as natural as some might think, and without 100% accuracy leads to too many time-consuming errors. The fact is you always need to use a keyboard even if you can do the majority of tasks with just your voice, and therefore voice recognition will never truly rule as an input method.
As smartphones become more powerful and more like computers, they become the ideal tools for voice-recognition software. And when combined with a search engine such as Google's Voice Search, keyboards could almost become a thing of the past.
If it weren't for games, perhaps a manufacturer would have already attempted a completely voice-controlled device?
In a way, Apple already has, with its almost buttonless iPod shuffle. The latest shuffle still offers voice control, however buttons were reintroduced after a lack of interest from consumers in a solely voice-controlled product.
"People clearly missed the buttons," said Steve Jobs at the time. Perhaps none of us want to be limited in control options; perhaps we're a little too shy to tell our electronic devices what to do in public.
We certainly felt a silly during the writing of this feature as we babbled away into a microphone while others looked on quizzically. Ultimately, it comes down to adoption and a sense of 'normality' from technology.
Remember, handsfree calling was once a niche feature but is now widely accepted, even if users do appear to be talking to themselves. For voice control and speech recognition, the same is true. If telling a device what to do with your voice becomes the standard, more and more people will start giving their fingers a rest.
Read More ...
In Depth: Where next for speech recognition on the Mac?
Ten or so years ago we imagined the future would be all about holograms, virtual reality and voice control, but now, in 2011, we've not quite reached those lofty expectations. While 3D TV is slowly filtering into the mass market and augmented reality has begun to replace the chunky headsets seen on 90s gameshows, voice control really hasn't made the mark we were expecting it to. So what is it about voice recognition that has left more of us typing than talking?
Voice recognition in a nutshell
In order to fully understand the ins and outs of voice recognition we need to look at its main uses, of which there are three distinct categories. The first is voice control; simple spoken commands that can do anything from check for new mail to switch between applications.
Voice control within Mac OS X is an assistive technology but can be used as a quick way to handle common tasks. The same technology is used for Voice Control in iOS to switch tracks, as well as by in-car stereos to control playback, phone calls and sat nav.
Dictate the proceedings
Then there's dictation, which requires more impressive speech-recognition work. This is handled by apps from Nuance such as Dragon Dictate, which uses algorithms to learn your voice and understand what you say.
For these more advanced applications you will need a decent-quality microphone or headset and a profile will need to be created so that your unique voice patterns can be understood accurately. This also applies to apps such as Scribe from Mac Speech, which learns your voice from audio files and can transcribe audio notes you have made into text documents.
The final category has seen an increase in awareness and functionality with the rise of the iPhone and Android handsets. Apple recently acquired a company called Siri that specialises in voice search and Google already has voice search included as part of its Google apps.
Voice search, while not as technologically advanced as the dictation apps, picks out keywords from your requests and actions them based on its understanding, for example, searching for nearby restaurants. This category slightly overlaps with voice control, but with advances made by Google especially, it deserves its own category for its location-aware nature.
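That keyword-spotting approach is simple enough to sketch. Here's a toy Python illustration of our own (nothing like how a real voice-search engine is built): a transcribed query is scanned for a trigger word, and everything after it becomes the subject of the action.

```python
# Toy keyword spotter: pull an intent and a subject out of a spoken query.
# Purely illustrative - real voice search uses statistical language models.
INTENTS = {
    "find": "search",
    "search": "search",
    "call": "call",
    "directions": "navigate",
}

def parse_query(transcript):
    """Return (intent, subject) for a transcribed spoken query."""
    words = transcript.lower().split()
    for i, word in enumerate(words):
        if word in INTENTS:
            # Everything after the trigger word is treated as the subject.
            return INTENTS[word], " ".join(words[i + 1:])
    return "search", " ".join(words)  # fall back to a plain search

print(parse_query("find nearby restaurants"))
# prints ('search', 'nearby restaurants')
```

A query with no trigger word simply falls through to an ordinary search, which is roughly how these services degrade gracefully when they don't understand you.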
You may not know it, but your Mac actually has speech recognition technology installed by default. Try it out for yourself.

Head to System Preferences and click on the Speech button. From here, you can not only name your Mac in order to give it commands ("Computer, check my email" and so on) but you can also tell it to be constantly listening for your commands, so if you do need to switch apps and don't have a hand free, you can just say it out loud.
Amongst the many spoken commands a Mac will understand, you can even ask it to tell you a knock-knock joke. Just say "Tell me a joke", and your Mac will respond "Knock-knock", to which you must reply "Who's there?", and so on.
For more advanced tricks, head to the Command tab under Speech in the System Preferences pane and click the Open Speakable Items Folder. Here you will find scripts for individual actions and specific applications that you can edit and rename to suit you.
To create your own shortcuts, you can simply change the name of a script that already exists or duplicate a script and edit the contents using AppleScript. If you want to change what you need to say in order to invoke a shortcut, simply change the file name of the speakable item to anything you wish to use instead.
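In other words, the file name is the trigger phrase. A rough Python analogue of that lookup (purely illustrative; Apple's implementation is not Python, and the script names here are our own examples):

```python
import os

def build_command_table(script_names):
    """Map each script file name (minus extension) to a trigger phrase,
    mimicking how Speakable Items uses the file name as the spoken command."""
    return {os.path.splitext(name)[0].lower(): name for name in script_names}

def dispatch(spoken, table):
    """Look up which script a spoken phrase should run, if any."""
    return table.get(spoken.strip().lower())

table = build_command_table(["Check My Email.scpt", "Tell Me A Joke.scpt"])
print(dispatch("check my email", table))
# prints Check My Email.scpt
```

Renaming a script file is all it takes to change its trigger phrase, which is exactly why the speakable items folder is so easy to customise.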
You can also add applications that aren't already featured in the speakable items folder, and include existing shortcuts and voice commands or create new ones from scratch.
When used correctly, this speech recognition is a handy tool, but it's far too easy for it to mishear a command, or to mistake a conversation (or you talking to yourself) for one. The outcome is a lot of repetition and accidental actions taking place.
There is an option available to turn a key on your keyboard into a kind of "push to talk" button, but using a finger to allow voice input kind of defeats the object of handsfree voice control. With a little tweaking and care, it's easy to control a number of applications and basic functions on your Mac without having to touch the mouse or keyboard, but it's certainly not perfect.
Talk to your phone
The iPhone and certain models of iPod also make use of speech recognition to change tracks, make calls and create playlists. By invoking Voice Control on the iPhone, a number of voice commands are available, much like speech recognition in Mac OS X.
Also like the Mac speech recognition software, voice control on the iPhone provides feedback to help ensure you select the correct command. As with desktop voice recognition, though, the iPhone's voice control can be hit and miss, and you run a real risk of calling the wrong person at the wrong time or playing obscure tracks from your iTunes library by accident.
With the new second microphone in the iPhone 4, audio clarity has been dramatically improved, leading to fewer mistakes, though errors are still possible, especially when the headphones are plugged in.
Mac speech recognition software
For this section of the article, we thought it was only fair, while extolling the virtues of voice control and dictation, to attempt to write it using only our voice.

Making use of Dragon's Dictate software, we are currently sitting in front of an iMac, looking pretty strange, speaking aloud as if to a secretary. In terms of accuracy, the speech recognition in Dragon's software is far superior, as it performs a series of tests and procedures that learn your voice and build a profile for specific uses. So even if you have a particularly unusual voice, your dictation is surprisingly error-free.
The other benefit speech recognition offers is pace. While commands spoken to your Mac may take a few seconds to execute as the computer attempts to understand what you've said, Dictate can handle large sentences at a time.
The software provides a floating window that hovers over your currently running application and enables you to perform basic dictation as well as related tasks, such as saving files, sending email and more.
With word processing, the difficulty lies in distinguishing between the words you want dictated and commands such as punctuation, so you have to be very careful when adding commas and full stops. As if to illustrate the point, that last sentence took a little longer than normal because the app thought we wanted a comma followed by the word "is" rather than the word "commas".
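You can see the ambiguity the engine faces by sketching the token-level decision it has to make. This is a crude Python illustration of our own (real dictation engines weigh context and intonation rather than matching tokens literally):

```python
# Map spoken punctuation commands to symbols; any other token is a word.
# A crude sketch - real engines use context, not literal token matching.
PUNCTUATION = {"comma": ",", "full stop": ".", "question mark": "?"}

def render(tokens):
    """Turn a stream of recognised tokens into punctuated text."""
    out = []
    for token in tokens:
        if token in PUNCTUATION:
            out.append(PUNCTUATION[token])  # attach to the previous word
        else:
            out.append((" " if out else "") + token)
    return "".join(out)

print(render(["adding", "commas", "comma", "is", "tricky", "full stop"]))
# prints adding commas, is tricky.
```

Notice that the literal matcher has no way of knowing whether you wanted the word "comma" or the symbol, which is exactly the trap we kept falling into while dictating.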
One of the most important things you will learn when using software such as Dictate is that you need to speak clearly but naturally, as if you were speaking to another human being. Tiny intonations in your voice and the raising and lowering of pitch give clues to the software as to what you're trying to say, especially when using words with more than one meaning.
Dictate can work with your Mac's built-in microphone or another microphone you may be using, however it's best to use the recommended hardware such as the Plantronics headset we were provided with.
Headsets with a push-to-talk or mute button are the most useful as they avoid accidental inputs if you happen to clear your throat or begin a conversation with a friend or co-worker.
The application is constantly working to learn your voice in order to provide a flawless experience, and you can return to the practice tests at any point to give it a clearer idea of the way you speak. You can also create profiles for different locations where there may be background noise, such as in an office or a coffee shop (although how many of us would want to be speaking out loud to a computer in a public place?).
Despite the comma issue (which Dictate seems to think should be "congress you"), it's very easy to ramble on for hours and hours while the application hastily notes down everything you say.
Nuance also provides a piece of software called Scribe, which does largely the same job as Dictate, except it works with audio files you have recorded previously using an iPhone or another recording device.

Again, this software has to learn your voice before it can accurately transcribe your audio file and can only do so when it has a profile created. Once complete, it's a simple process of importing your audio note, checking for errors and receiving the transcribed text.
The same applies to the Dragon Dictation app available for iPhone and iPod, which does a pretty good job of recognising your voice in real-time and saving it as text.
While Dragon is the best way we have found to control applications and accurately dictate, it doesn't provide the totally hands-free experience one might expect. It's a great deal easier to walk around the room calmly speaking your thoughts while the computer does the work, but a certain amount of editing and adjustment is needed before you save your final copy.
Once again, as if to illustrate the point, we just changed 'savior' to the correct 'save your' in the last line. We dictated more than 50% of this article, amounting to 1000 words or so, and found we only had to weed out a few common mistakes such as similar-sounding words, grammatical errors and missing capitalisation, but it was light work in comparison with many options we've tried before.
Get bossy
It seems that speech recognition isn't quite at the level one would expect at this stage in its development. The software understands what we are saying and can accurately transcribe those words; it can also perform basic commands based on voice input. But it's perhaps the software performing the actions, rather than the engine transcribing the text, that needs further development.
Rather than simply telling a computer to check for mail, which you could do in the same amount of time with a mouse click, why can't it answer more complex questions such as "Do I have any important email?"
It would be more useful to have simple scripts along the lines of Google's Priority Inbox, which understands that when you say "important" you mean a specific set of contacts who may have emailed you.

The same is true of apps such as iCal, where scheduling meetings or events currently isn't as simple as one might think. What if you were able to say to your computer: "Set lunch with Dave tomorrow at two" and the computer understood your command, set the calendar date, emailed Dave and even went ahead and reserved a table at your favourite restaurant using an online booking form?
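A command like that really only needs a few "slots" filled before software could act on it. A toy slot-filling parser in Python shows the idea (the pattern is hardcoded and the names are our own invention; a production system would need far more flexible language understanding):

```python
import re

def parse_command(utterance):
    """Pull event, person, day and hour slots from a spoken command shaped
    like 'Set <event> with <person> <day> at <hour>'. Returns None if the
    utterance doesn't fit the hardcoded pattern."""
    match = re.match(r"set (\w+) with (\w+) (\w+) at (\w+)", utterance.lower())
    if not match:
        return None
    event, person, day, hour = match.groups()
    return {"event": event, "person": person, "day": day, "hour": hour}

print(parse_command("Set lunch with Dave tomorrow at two"))
# prints {'event': 'lunch', 'person': 'dave', 'day': 'tomorrow', 'hour': 'two'}
```

With the slots extracted, the rest is plumbing: create the calendar entry, email the named contact, and so on. The hard part is that people don't stick to one pattern, which is why the pattern-matching approach alone doesn't scale.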
The technology exists; it's just a question of how it's applied. And here is where the crossover between desktop and mobile voice recognition is making the biggest difference.
Voice Search
Google search facilities get better and better with each update and now, via the iPhone and Android handsets, it can provide search results based on a spoken question, taking into account your location and preferences. This is as close to true voice control as we have ever been.
Siri performed a similar job on the iPhone, then mysteriously disappeared from the App Store before the announcement was made that Apple had bought the company. Following its public spats with the search-engine giant, Apple is unlikely to continue using Google's search, maps and voice recognition tools, but it sees the major benefits voice recognition offers mobile phone users, hence the acquisition.
Perhaps smartphones and the explosion of powerful GPS-enabled devices are exactly what the speech-recognition industry needs: an injection of awareness to bring it into the mass market. As the world becomes increasingly mobile, with iPads and iPhones taking on more of the daily burden traditionally shouldered by laptops and netbooks, speech recognition is a much-needed tool, and its popularity is likely to increase.
It won't be long before a synced phone mounted in a vehicle will respond to voice controls as standard and companies such as Ford, with its Voice Activated Sync, are leading the way. This control of devices through voice is not only convenient, but a serious safety measure to counteract the dangers of using a phone while driving.
Fancy talk or careless whispers?
With the many benefits of speech recognition, it seems strange that it hasn't quite taken off in the way some would have expected. But it appears that things are now beginning to change.
On the desktop, voice recognition is likely to remain limited to dictation apps; mobile, however, is where the more exciting voice-recognition apps are beginning to emerge.
Controlling your computer with your voice isn't quite as natural as some might think, and without 100 per cent accuracy it leads to too many time-consuming errors. The fact is you always need a keyboard, even if you can do the majority of tasks with your voice alone, and therefore voice recognition will never truly rule as an input method.
As smartphones become more powerful and more like computers, they become the ideal tools for voice-recognition software. And when combined with a search engine such as Google's Voice Search, keyboards could almost become a thing of the past.
If it weren't for games, perhaps a manufacturer would already have attempted a completely voice-controlled device.
In a way, Apple already has, with its almost buttonless iPod shuffle. The latest shuffle still offers voice control, but buttons were reintroduced after consumers showed little interest in a solely voice-controlled product.
"People clearly missed the buttons," said Steve Jobs at the time. Perhaps none of us want to be limited in control options; perhaps we're a little too shy to tell our electronic devices what to do in public.
We certainly felt a little silly during the writing of this feature as we babbled away into a microphone while others looked on quizzically. Ultimately, it comes down to adoption and a sense of 'normality' from technology.
Remember, handsfree calling was once a niche feature but is now widely accepted, even if users do appear to be talking to themselves. For voice control and speech recognition, the same is true. If telling a device what to do with your voice becomes the standard, more and more people will start giving their fingers a rest.
Read More ...
Review: Swype
One of the benefits of Android over iOS is that Google is quite happy to let you rip bits out of it and replace them with your own, if you choose. Swype does this for the built-in keyboard, and it takes roughly 0.4 picoseconds for you to start wondering how you ever lived without it. You can still tap out words a letter at a time, so you're not losing out on any features. What you gain is the ability to form words by simply dragging your finger between the letters, with Swype instantly working out what you mean. It works amazingly well, almost magically, for words in its dictionary.
There's no need for pixel-perfect precision, as is often the case when tapping at individual keys, or worrying about details like whether you should leave your finger hovering over a letter for longer in words like 'bubble'. Most of the time, Swype will work it out without a fuss.
If it's not sure, it pops up a menu with the most likely contenders – 'time' for instance might also be 'tinge' – which is still no slower than having typed it. The increase in input speed is incredible, making it a breeze to write emails, shoot off a quick tweet or respond to an SMS.
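Under the bonnet, the problem is roughly: given the letters a finger sweeps across, which dictionary words could it plausibly mean? A naive subsequence check in Python captures the idea (a toy of our own, not Swype's actual matching algorithm, which also weighs key positions and word frequency):

```python
def is_subsequence(word, path):
    """True if the word's letters appear in order within the traced path."""
    it = iter(path)
    return all(letter in it for letter in word)

def candidates(path, dictionary):
    """Dictionary words the traced letter path could plausibly spell: the
    word must start and end where the trace did, and its letters must
    appear in order along the trace."""
    return [w for w in dictionary
            if w[0] == path[0] and w[-1] == path[-1] and is_subsequence(w, path)]

# A trace for 'time' sweeps through neighbouring keys too, so both
# 'time' and 'tinge' survive - hence the disambiguation menu.
print(candidates("timnge", ["time", "tinge", "tin", "tame"]))
# prints ['time', 'tinge']
```

When more than one word survives the filter, the keyboard has to ask, which is exactly the pop-up menu described above.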
Best of all, this keyboard appears in absolutely every app you use, slotting in where the now officially rubbish Android default used to be. No software updates. No incompatibilities.
If you're using Android, install this now. You need it. You'll never look back. It's not on the Market yet, so visit www.swypeinc.com to sign up for the beta and download it to your phone immediately.
Related Links
Read More ...