In Depth: How to develop for Linux
A quick poke around freshmeat.net is testament to this, with its rich tapestry of useful, wild, and often wacky apps. Distros such as Debian, Fedora and Ubuntu bring these applications to their users with a quick apt-get or a yum install.
As this work was going on, Apple released the iPhone and made a major breakthrough in computing; it made the application developer platform attractive, consistent and accessible to consumers. Anyone with an iPhone could download a range of apps, many simple and silly, but all consistent in their presentation, and often new and innovative in how they used features of the phone such as the GPS, accelerometer and other facilities.
Apple built this platform from scratch, but thanks to the work put in by the GNU project and countless others, we already have a platform. We have a rich set of development tools, a range of desktop environments and a wide range of development forges packed with source control, bug tracking and other features.
Let's get better
While we have had the tools for a long time, what we have done less of a good job at is pulling these tools together into a consistent experience aimed at application authors. This is something that the Ubuntu community is working on, and we're going to look at some of this work and how it's useful for the wider open source ecosystem – that includes you too, non-Ubuntu fans!
Developers are funny beasts. Although from the outside they seem like a fairly consistent menagerie of code-writing, beer-drinking, pizza-eating creatures, their motivation and desire around the art of writing programs varies.
Within this variety though we can discern two sub-sets: systematic and opportunistic developers. Both write code, but each motivates themselves and approaches their work in slightly different ways.
Systematic developers are typically professional developers. They're the kind of people who will write a feature, then immediately document it and write an automated test suite for it, always thinking structurally about their application as it stands today and how it will scale and grow in the future.
Systematic developers are the very definition of professionals, and they are commonly conservative, cautious and resistant to adding features unless they are implemented with completeness and precision. Systematic developers commonly commit themselves to a single project for long periods of time and often grow significant generalist knowledge of the codebase, becoming almost tribal elders in their respective projects.
Opportunity hacks
Opportunistic developers invert many of the properties of the systematic developer.
Opportunistic developers are interrupt-driven hackers who are in the business of scratching itches. They feel a problem or smell an opportunity and will often have a first cut of code ready within a matter of hours. They are much more shoot-from-the-hip types of folks; their code is sometimes not well commented, and unit tests are often a low-priority item on a to-do list somewhere, but these attributes don't necessarily mean they are inferior coders.
They are often excellent coders, but they are reactive, energetic hackers who love to solve problems that they feel personally and are passionate about.
Opportunistic developers are the very lifeblood of Linux. When we talk about the basic building blocks of open source, we often talk about 'scratching your itch' – and this is exactly what opportunistic developers do. Our goal in the Ubuntu world, and the goal of many others, is to ensure that the barriers to itch-scratching are as low as possible.
Optimising for opportunity
When developers want to produce software they enter into a four-step process: Imagine > Create > Collaborate > Publish. Each of these broad, high-level steps breaks down into a more detailed set of elements:
DEVELOPING STEPS: The common steps involved in building a Linux application.
1. Ideas: This matches the Imagine step of the higher-level process; thinking of ideas of software to work on.
2. Gnome/KDE: This matches the Create step of the higher-level process; using a preexisting development platform to create your application with.
3. Launchpad/BZR: This matches the Collaborate step of the higher-level process; using Launchpad and Bazaar to work with other developers to make the application better.
4. Debian Packaging/PPA: This matches the Publish step of the higher-level process; packaging your application and then publishing it to a Personal Package Archive, which enables others to download and install it like any other package.
Let's now take a look into some of the work and projects that have been going on to simplify and improve this process.
Imagine
The very first step is to imagine a solution to a given problem.
At this very first stage the developer needs to feel empowered: to have the motivation, tools, skills and determination to implement the vision they see in their mind. Although this sounds like a simple first step, it is a challenging one.
To optimise it there needs to be a wealth of positive stories of how developers have dreamed up solutions to problems and effortlessly implemented them because the platform was a help rather than a hindrance.
In the Ubuntu world we have tried to build an atmosphere around the concept of Ubuntu providing a complete and comprehensive platform for implementing whatever solution the developer dreams of. We have done this by organising events such as Ubuntu Application Developer Week and creating support resources such as the #ubuntu-appdevel IRC channel on the Freenode IRC service.
While the motivational element for opportunistic developers is largely a story of encouragement and outreach, the following three steps in the four-step process are very much about technology, and the goal is to lower the barriers so that people can get up and running as quickly and easily as possible.
Create
Over the years a vibrant developer community has formed, complete with a vast array of tools, languages and functionality. Unfortunately, while powerful, many of these tools are awkwardly complex, and many developers have let their ideas and creativity get buried under an avalanche of confusion around how these tools fit together.
Part of the cause of this problem is that many developer tools only cater to systematic developers; the kind of code-writing workaholics we mentioned earlier who hack for a living, with a fervent attention to detail backed up by unit tests and other hallmarks of the professional programmer.
For many opportunistic developers, if the tools needed to scratch their itch require too much effort or investigation, the itch can quickly disappear and what was once a creatively excited hacker has now moved on to be a couch-bound excitable PlayStation gamer who grew bored with Linux as a platform.
A solution to this overt complexity in the toolchain came in the form of a simple tool called Quickly, the brainchild of Rick Spencer, now director of Ubuntu engineering at Canonical. Quickly gets you up and running quickly (it's not just a clever name) when writing an application from scratch.
Traditionally, writing desktop applications has involved a not-insignificant amount of faffing with build systems, source control, packaging frameworks, graphical interface tools and other things that get in the way of writing code. Quickly is a tool that simplifies how these different pieces fit together.
Quickly provides a framework with a series of templates for creating different types of applications. Each template makes a series of decisions about the tools involved in creating that application. By far the most popular template, and the one that Quickly itself was created to satisfy, is the Ubuntu template.
This template uses a set of tools that has become hugely popular in modern desktop software development, and tools we have harnessed in Ubuntu. They are:
Python: A simple, easy-to-learn, flexible and efficient high-level language.
GTK: A comprehensive and powerful graphical toolkit for creating applications, and the foundation of the Gnome desktop environment.
Gnome: The desktop environment that ships with Ubuntu, offering many integration facilities.
Glade: An application for developing user interfaces quickly and easily, which can then be loaded right into your Python programs.
GStreamer: A powerful but deliciously simple framework for playing back and creating audio, video and other multimedia content.
DesktopCouch: A framework for saving content in a database that is fast and efficient, hooks neatly into Ubuntu One and is awesome for replication.
Gedit: For editing code, Quickly assumes you are going to use the text editor that ships with Ubuntu, which provides a simple and flexible interface for writing your programs.
With this core set of tools you can write any application you can imagine and know that it will run effortlessly on Ubuntu and other distributions. The elegance of Quickly is that it understands a common platform for Linux, but it doesn't complicate that desire for simplicity by sliding down the slippery slope of investing months of energy in an Integrated Development Environment (IDE), when many Linux users are in fact comfortable with the command line.
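To give a flavour of the workflow, here's roughly what a first session with the Ubuntu template looks like, assuming Quickly is installed and using 'myapp' as a stand-in project name (run quickly --help to check the exact commands your version supports):

quickly create ubuntu-application myapp   # generate a skeleton Python/GTK project
cd myapp
quickly edit     # open the project's source files in Gedit
quickly design   # open the user interface in Glade
quickly run      # launch your application

Each command wraps the underlying tools, so you never have to touch the build system or source control directly until you want to.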
Collaborate
While Quickly is fantastic for getting users up and running with a new application, there is a much wider challenge around how developers can collaborate together around code. Producing software and providing an environment in which contributors can work together on it requires a large number of tools and the integration of those tools.
In the bad old days of open source it was a nightmare to set up and integrate these tools, but these days we have a variety of free websites with ready-to-roll development environments for creating and maintaining open source projects easily. One such example is Launchpad.
LAUNCHPAD: Launchpad is Ubuntu's site for hosting code and fixing bugs, and is where you should upload your new app.
Launchpad is a powerful, simple and comprehensive development forge that has become hugely popular over the last few years (it currently sports over 17,000 projects) and is right at the heart of how Ubuntu is developed. The site provides a range of useful facilities:
Code hosting: Launchpad knits together neatly with the popular Bazaar distributed version control system. Together they provide a fantastic method for contributors to work together on code, merge changes into a main code base and host code online.
Bug tracking: Although traditionally a complex and ugly part of software development, the bug tracker built into Launchpad is simple and effective.
Translations: Many projects struggle with providing multiple language support, but Launchpad provides a simple means for anyone who knows more than one language to translate applications without ever touching code.
Blueprints: This feature provides a means to produce specifications for ideas and features in your project.
Support: Launchpad provides a question-and-answer facility that is well suited to give support for your users.
Package building: A hugely popular feature in Launchpad is the ability to have your very own Personal Package Archive (PPA), which enables you to automatically build and deliver Ubuntu packages to your users.
If you want to find out more about the range of facilities in Launchpad, you should take a look at the online tour at https://launchpad.net/+tour.
Launchpad offers a simple and effective experience for creating applications, and much of its simplicity lies in how its different components link together. As an example, you can create a blueprint, attach bugs to it, attach Bazaar branches to those bugs and more. This interconnection helps to make information visible and ensures that developers always know what is going on.
Launchpad is not perfect though, and some developers have tried to simplify its use in application development. One such example is the way that Quickly enables you to publish to Launchpad (more on this later). Another interesting example is a tool called Ground Control by Ubuntu community member Martin Owens.
GROUND CONTROL: A more specific set of steps to produce an app for Linux.
Ground Control takes an innovative approach, turning your file manager (Nautilus) into your development environment. Imagine you want to fix a bug. The process typically works like this (a command-line sketch follows the list):
Choose a bug to fix: You find a bug on Launchpad that irritates you enough that you want to fix it.
bzr branch: Download the code for the project that's afflicted by the bug.
Fix bug: Perform the fix in your local branch of the code.
bzr commit: You commit the fix to your local branch, ready to push.
bzr push: You push the code to Launchpad so the maintainer of the application can take your fix and apply it.
Attach branch to bug report: For completeness, you attach the branch to the bug report. This ensures that anyone subscribed to the bug report is aware of the fix.
Propose for merge: You then follow the Launchpad 'Propose Merge' process in which you notify the original developer of the fix so he/she can review it and merge it in if suitable.
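In command-line terms, the whole dance looks roughly like this, with 'someproject' and 'yourname' standing in for the real project and your Launchpad username:

bzr branch lp:someproject        # grab the project's code
cd someproject
# ...edit the files to fix the bug...
bzr commit --fixes lp:123456 -m "Fix crash when saving"   # links the commit to bug #123456
bzr push lp:~yourname/someproject/fix-123456              # publish your branch on Launchpad

The merge proposal itself can then be made from the branch's page on the Launchpad website.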
When you are a new developer starting this process, all those commands and the correct order and syntax can be a little confusing. Many developers have gone so far as to create a sticky note outlining the process until it becomes rote.
Martin Owens' Ground Control project provides an entirely graphical way of performing the same process...all within a file manager. The way it works is that you load up Nautilus and browse to a Projects directory in your home directory. In there is a 'Fetch Project' button. Clicking on it pops up a dialog box in which you can search for a project (for example the Ground Control project mentioned earlier).
When you perform the search a list of matching projects will be displayed, and you can click on one to select it. Doing this creates a new folder in the Projects directory in Nautilus with the same name as the project (eg groundcontrol).
If you click inside this new folder another button called 'Fix Bug' appears. Clicking it pops up another search dialog box which enables you to search for a bug number or bug search term inside that project. When you search, a range of bugs are displayed, and you can double-click on one to grab the latest code from Launchpad and automatically create a folder called bugfix-lp-123456.
QT CREATOR: The KDE team have an excellent set of Qt development tools for building apps.
Ground control
You now go and hack on the code in that folder and fix the bug in question. When you have changed some of the files in that folder a new button appears in Nautilus called Upload Fix. Clicking that button opens a new dialog box where you can describe the changes you made to the code.
Clicking OK then pops up a final dialog box asking you to enter a merge message (this is the message that you send the developer asking them to merge your bug fix into the main code). When you click on OK, your changes are pushed to Launchpad, the branch is attached to the bug report in question, and a merge proposal is automatically made.
The entire process simply involves clicking buttons in a logical sequence, and at no point do you ever need to enter a command or create a note to remind you of the process. Projects such as Ground Control demonstrate the desire to simplify the process of collaborating on development, and the project was made possible by the flexibility of the Launchpad API, which enables developers to provide alternative interfaces to the data inside Launchpad.
Publish
With a simple method of creating applications, and a simple method of collaborating around applications, the next step is to get your application into the hands of users. This process is typically broken into two steps:
Packaging the application: Making the installation and removal of the application compatible with different distributions by using either the Debian packaging system (Deb), Red Hat Package Manager (RPM), or other system such as Gentoo's Portage.
Uploading to a distribution: Unlike with Windows, we don't expect users to go to random websites and download executable files. We instead expect distributions to have large archives of pre-packaged software. As such, we need to get the application uploaded to the archive.
Unfortunately, both of these steps have traditionally been quite complicated. The former has involved learning the relevant packaging systems, which in themselves can be fairly complex even for a basic desktop application. Part of the challenge with packaging has been that there are often many different ways to package an application, and the skills required to package your new app are often outside of the scope and interest of application developers.
Fortunately, Quickly eases this significantly. With a single command you can generate a Debian/Ubuntu package that's fully compatible and pulls in all required dependencies (much of this was made possible by the excellent work of Martin Pitt).
In addition to this, Quickly includes a 'release' command that will automatically produce an Ubuntu package and upload it to your Launchpad Personal Package Archive, all in one command. This effectively makes it a one-command operation to publish new versions of your software, and saves you oodles of reading about packaging when you would prefer to be hacking on your app.
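For a Quickly-generated project, the packaging and publishing steps look something like this (your own PPA details will differ):

quickly package    # build a .deb you can test-install locally
quickly release    # bump the version, build the package and upload it to your PPA

There's also quickly share, which uploads a snapshot to your PPA without making a formal release – handy for getting quick feedback from testers.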
QUICKLY: A sample Quickly app just after it has been generated.
The latter of the two steps above, uploading to the distribution, is the more complex element. All Linux distros have teams of developers who have worked hard to build the trust and technical competence needed to be approved as official developers; that is, to gain direct upload rights to add packages to the archives and future releases of a given distribution.
Gaining these upload privileges often requires significant skills, and the assessment processes were traditionally designed for vetting operating system integrators. As an example, in Ubuntu there are two broad types of contributor:
Core Dev: This is for developers who want to upload to the 'main' archive, which includes all of the officially supported software (such as the software in the official release ISO images and CDs).
MOTU: This is for those developers who want to work on the non-supported Universe archive, which includes thousands of applications imported from Debian.
Becoming a Core Dev requires significant generalist Ubuntu and packaging knowledge, and becoming a MOTU likewise demands comprehensive technical competence. With both there is an assumption that developers will be working on multiple packages, and these developer-assessment processes rightly require a high level of quality.
The challenge with these current processes is that for app developers they are a little heavy. To help resolve this in the Ubuntu 10.10 cycle a new process called the Application Review Board was introduced, in which application authors can submit an application for technical assessment by a community board.
If the application meets a set of technical assessments around code and packaging quality, the application is approved and made available in the Ubuntu Software Centre. Details about the process can be found at http://wiki.ubuntu.com/AppReviews.
Wrapping up
In the last few years we have seen ever more competition in attracting application developers to different platforms.
While Apple and Google have done an excellent job with their respective platforms, there is a huge opportunity to make Linux a top-tier application platform, and this article outlines some of the work going on in the Ubuntu world to help encourage and motivate application developers and make their lives as easy as possible.
This article has not had the space to cover the many innovations happening inside the Gnome and KDE camps, other distributions or the wide variety of upstream projects that are seeking to make development easier. Fortunately, it seems that many in the open source community are passionate about enabling more people to contribute to free software, and if we keep stepping back and making our different tools, processes and systems easier to use, we can hope to see a wealth of additional applications available across different Linux distributions.
Tutorial: Ride the airwaves without a radio
This might suggest that all you'll ever need is an Internet radio portal, such as www.reciva.com, but that would be ignoring one important fact – the airwaves contain much more than just broadcast stations on FM.
In the past, if you wanted to discover the more unusual signals on the radio bands, you'd have needed to buy a shortwave radio and a huge aerial, or share an online radio in 30-second windows. Now there's an alternative: web-based software defined radios, or SDRs.
Online receivers
Modern but conventional communications receivers have digital inputs, so they can be controlled from a PC. This led the way for the first generation of online receivers, which enabled remote users to select a frequency via a web interface and listen to a stream of audio representing the station on that frequency.
The snag here was that an ordinary receiver could only be tuned to one frequency at a time, meaning that these web radios were essentially single-user affairs. An SDR (or software-defined radio) is a radio receiver that uses software to achieve most of what a conventional receiver does using electronic circuits.
If you were to buy an SDR as a black box, the processing would be carried out on the receiver's internal processor, but it's also possible to do that processing on a PC.
Web-based SDRs
Although you can buy the hardware necessary to turn your PC into an SDR, our theme here is how to listen to the airwaves without a receiver of your own. Web-based SDRs have sprung up around the world, offering a second generation of online receivers.
The important point is that there's no concept of the hardware being tuned to a particular frequency, because it's the software that's responsible for selecting individual stations from a wide band collected by a high-frequency sound card. The upshot of this is that these online receivers can be used by lots of people at once, each listening to a different frequency.
To get a feel for a web-based SDR, we're going to use the one at http://websdr.ewi.utwente.nl:8901. This is located at the University of Twente in the Netherlands and offers a good selection of bands. If you get bored and want to listen to the radio bands from a different part of the globe, you can find links to plenty of other Web SDRs at www.websdr.org.
WEBSDR: www.websdr.org provides links to online SDRs worldwide. You'll need to enable JavaScript in your browser and have Java installed to access them.
All these use the same software, which requires that you have JavaScript enabled on your browser and that you also have Java installed – you can obtain the most recent version from www.java.com. Some appear to work in just about any browser, but others are fussier. We suggest you use the latest version of Internet Explorer or Firefox.
With the Twente SDR onscreen, you'll notice that it shows a waterfall display for each of the bands it covers. This provides a graphical view of a band, changing with time, so you can see where the signals are. This SDR covers several shortwave amateur bands, plus part of the medium wave broadcast bands and the bottom end of the radio spectrum to 165kHz.
We'll begin by listening to something familiar – a medium wave broadcast station. Select 'AM' from the bandwidth controls below the waterfall displays. This stands for amplitude modulation, and is the type of modulation used by broadcast stations in the LW, MW and SW bands.
Now look at the medium wave band – that's the third waterfall display down, which covers 462-563kHz – and you'll probably see that the most prominent station is centred on 459kHz. Click in the black area below the waterfall display to select this frequency, and you should hear a German radio station. This is Deutschlandfunk, which is located at Nordkirchen in North Rhine-Westphalia and is the loudest station in this frequency range from this part of Holland.
Amateur broadcasts
Now we'll try something you're probably less familiar with – we're going to listen in on one of the radio bands used by amateur radio enthusiasts worldwide. This web SDR covers the 160m, 80m, 40m, 30m, 20m and 15m bands, and what you hear depends on atmospheric conditions and the time of day.
Generally, the 160m (1,800-1,892kHz), 80m (3,500-3,800kHz) and 40m (7,000-7,200kHz) bands will be the most consistent. If you do manage to hear stations on the other bands, they will often be more distant.
Whichever band you choose, pick stations in the upper half (higher frequency) portion. If you're listening on the 160m, 80m or 40m bands you should choose LSB (lower sideband) as the modulation mode in the bandwidth controls, and if you're in the other bands you should select USB (upper sideband).
These single-sideband signals occupy a narrower band of frequencies than AM, so the tuning is much more critical, and until you get it spot-on, voices will have a characteristic Donald Duck sound.
The yellow graphic in the black area below each waterfall display shows both the nominal frequency and the selected bandwidth, and you can alter either of these by clicking on and dragging the respective lines. Dragging the nominal frequency – the vertical yellow line – is a good way of fine tuning. Practice makes perfect, and you should soon be listening in on amateur radio contacts.
Although both stations normally transmit on the same frequency, you may only hear one side of the conversation if the other station isn't audible from Twente. If you look at the waterfall display for an amateur band, you'll notice that the stations in the upper-frequency portion tend to be about 3kHz wide (these are voice stations), but those at the bottom end of a band are much narrower.
Try clicking on one of the narrow signals and you'll probably find that it's Morse code. If you still have LSB or USB selected then you might hear more than one signal at once, so select 'CW-wide' or 'CW-narrow' from the bandwidth controls (CW stands for 'continuous wave'). This will reduce the receiver bandwidth so that you hear fewer stations.
Deciphering Morse
You should now be listening to an amateur Morse code signal, but it will probably be totally meaningless to you. Fortunately, software is available to decode Morse code, and here we'll see how to use it to decode signals from a web-based SDR.
This is a rather hit and miss process, since there's no way of knowing what you'll find on the bands at any one time. The software we'll be using for decoding Morse is called MultiPSK. We'll provide brief details on how to use it here but you might like to refer to our article in PC Plus 279, where we used it with a real (as opposed to SDR) radio receiver.
MULTIPSK: MultiPSK can decode Morse code and a lot more, including radio teletype and slow scan television, which are used to transmit words and images.
You'll need to route the audio output from the web SDR (from your web browser) into MultiPSK – see 'Loop back your audio' above for details of how to do that. Now start MultiPSK and select 'RX/TX screen'. You'll notice that it has a waterfall display, like the ones on the online SDR, but rather than covering a whole band, it covers a much narrower band of frequencies as selected by the SDR.
This is a Morse signal, so click on the 'CW' button (yellow) from the block of controls at the top right. Also click on the most obvious signal in the waterfall display to select the station you're listening to more accurately.
With a bit of luck, the text display at the bottom will show something resembling English. Take a look at 'Spotlight on… Understanding amateur Morse' first, though, because the dialogue will be almost unintelligible to the uninitiated.
SDR on your PC
You can also run SDR software on your own PC. To do this live, you'd need some external hardware to amplify the signal and do some preliminary processing, but you can use SDR software to extract signals from a recording of a broad swathe of the radio spectrum.
If you struggled to interpret live Morse signals from an online SDR, this is a sure-fire way to see MultiPSK in action. The SDR software we're using is called Winrad. You'll also need a recording of a section of an amateur Morse code signal. Although this is a WAV file, it'll sound odd if you listen to it normally because it's a broadband recording.
WINRAD: Turn your own PC into an SDR using Winrad. You'll need extra hardware if you want to do this live, but you can extract signals from a recording using software alone. You'll find an example at www.dk3qn.com/wfSDRwav
Start Winrad and open the file from 'Show Options | Select Input | WAV file', before clicking on the 'Start' icon (the right-pointing triangle towards the right of the screen). You'll see the now familiar waterfall display, but because Winrad is using your soundcard rather than dedicated hardware, the frequency range is smaller than with the online SDRs.
You'll soon find your way around Winrad's interface. Virtually all the signals in this part of the band are Morse, which will give you plenty of opportunity to use MultiPSK.
Tutorial: How to isolate colour in photos
The best reason to create an image with a strong isolated colour – or a bold area of isolated colours – is to add impact to an element that otherwise gets lost in the composition. It can add poignant impact to a small but crucial detail, and can be a way of tugging on people's heartstrings.
Examples in photography abound – visit any tourist art market and you're bound to see a few. London telephone boxes and double-decker buses are typical examples. The fact that the technique is so popular tells you all you need to know – it gets the message across, and can turn a dull photograph into one with an instantly obvious focal point.
Choosing your shot
The popularity of colour isolation means you need to be careful using it. You're unlikely, for instance, to find many coffee table books stuffed with colour-isolated images. As with any technique designed to have an emotional impact, overdoing it will result in images that are exhausting and repetitive to browse through.
Your best bet is to pick one photo on which the effect works well and use that, rather than applying the technique to everything you shoot.
While isolating a colour can save a dull photo, it's good practice to use shots that you're already happy with. Tight zooms don't work well; choose an image with a strong, clear and brightly coloured main subject. Bold red generally works well, hence the popularity of buses and telephone boxes.
Composition isn't so important when choosing a shot – you're forcing the viewer to notice what you want them to by rendering an object in colour, so this is a good way to use an image that's sharp and well exposed, but slightly off compositionally.
What you'll need
As well as your image, you'll need something to edit it with. The techniques used here can be translated easily to GIMP (available to download free from www.gimp.org), but most users will have more success with Photoshop Elements (£78 from www.adobe.co.uk). Elements is not only a more comprehensive application, offering a capable library as well as an editor, but some of its tools – such as quick selection – are more refined and quicker to use.
Applications that don't allow proper 'per-pixel' editing, such as Adobe's Lightroom or the free Picasa, aren't good for this kind of work. You might find that your camera has an automatic colour isolation setting, removing the need to edit your images in post-production at all and leaving you with roughly the same effect.
Getting going
Before you begin the walkthrough, make sure your image is otherwise finished. That means that any sharpening, cropping or tone curve adjustments should be completed before you start. Attempting to finish a photo that's already had major work done on it will result in loss of detail, particularly if you're working on a JPG image rather than a Photoshop PSD file.
When editing your image initially, remember that you're aiming for an over-the-top effect. Feel free to overdo the saturation, paying particular attention to the area that you intend to remain in colour – we're going to make sure there's plenty of contrast in the background on the final image, so make sure the colour stands out.
With this technique, representing reality accurately is secondary to achieving maximum impact. It's also a good idea to work on a copy of your image in case you save an imperfect version accidentally. Open your prepared image and save it as a work in progress file. If you're using Photoshop, saving the image as a PSD file is generally a good idea.
PSD files have a few strengths for this kind of work – crucially, they don't degrade in quality each time you save them, as JPG images do. They also support layers, which means that once your image is finished, you don't need to flatten it and lose the ability to make wholesale changes later.
The only time you should save your work as a JPG is when it's totally finished and ready for print or uploading to an online photo album.
How it works
This technique works because of the support many advanced editing applications have for layers. A layer is a simple concept – it's effectively another image that fits exactly on top of your first image, all within the original file.
Each layer can have elements added to it such as text, and in Photoshop it can work as an adjustment layer, filtering the image beneath to give it more saturation, for example, without editing the original pixels.
In this example, we use layers very simply. The topmost layer of the image is a black and white version of the original photograph, with areas carefully removed to reveal the colour version beneath. If you have a steady hand (particularly in conjunction with a graphics tablet), you may find that you're able to simply erase sections of the topmost layer on some images by hand. In most cases, however, it will make sense to select an area precisely first and use your selection as a guide.
Isolating colours with Photoshop Elements
1. Prepare layers:
Open your image (ideally from a copied file so you have a backup), and pay attention to the Layers palette. If you can't see it, click 'Window | Layers'. You'll see a thumbnail – right-click it, choose 'Duplicate layer', and click 'OK'.
Nothing will happen to your image, but you now have two layers – one on top of the other. Making one layer black and white and exposing parts of the layer below will produce our effect.
2. Convert to mono:
Go to 'Enhance' on the menu bar and click 'Convert to black and white', or use [Ctrl]+[Alt]+[B] on your keyboard. A few styles are listed for you to click through, with the results previewed both in the conversion window and in the main image window.
Our advice is to go for something with a medium amount of contrast – not too bright or too dark. Click 'OK' when you're happy with your choice.
3. Zoom in:
This technique works by selecting an object in your photo, then removing it from the topmost layer, allowing the coloured layer to show through. It's always best to work very carefully – mistakes might not be obvious when reviewing your images on screen, but they will be once printed.
Click the magnifying glass in the toolbar or press [Z], then click and drag around the object you want to select to zoom in on it.
4. Select object:
Click the Quick Selection tool or press [A]. This tool works by selecting adjacent areas of your image which are the same colour or texture. Click and drag the mouse pointer over the part of your image you want to colourise, and don't worry if the selector makes the odd mistake. The edges will be refined when you let go of the mouse button, and the next step demonstrates how to refine your selection.
5. Fine-tune selection:
It's very important that you don't tolerate a less-than-perfect selection, because it will have a negative impact on your final image. If the selection tool has chosen inappropriate parts of your image, press [Alt] and click and drag the mouse pointer over them. This will remove them from your selection.
Similarly, if you remove part of your selection that you wanted to keep, simply click and drag back over it.
6. Delete selection:
Tap [Delete], and the area you've selected will be removed, allowing the layer beneath, which is still in colour, to show through. Zoom to 100 per cent and make sure the edges look bold and confident. If you find an area that needs editing, the Eraser tool (press [E]) is a good way of carefully removing stray black and white elements.
Once you're happy, save the file with layers as a PSD document.
In Depth: 12 essential system recovery tools
When it's time to go, it's time to go, and it usually happens at a bad time. There's no point fretting over the loss, though. Instead, use the plethora of tools out there to minimise the damage.
Have you accidentally deleted your anniversary photos? Installed a new OS that's botched the partition table? Can't read data from an old CD? Don't panic. We'll point you to the free tools that'll help you get out of a tight spot.
Install a Linux distro – Ubuntu is a perennial favourite – then use its package manager to install the following programs. Search for the program name exactly as written to install it.
1. Photorec - recover lost files from all kinds of corrupted media
You don't have to try too hard to wipe data from your hard drive. A misplaced space in the 'rm' command will do the trick. At least graphical environments are a little more forgiving, letting you restore files you've trashed accidentally. But what about the holiday photos that were stored on the CF card you just flashed?
PHOTOREC: With the size of modern hard disks, don't be surprised if Photorec finds a file you deleted weeks ago. It can find files in over 300 popular formats.
That's where PhotoRec comes in handy. It ignores the filesystem and goes directly after deleted files on hard disks, optical discs, USB drives, memory cards and even portable music players such as iPods. It reads blocks of data in FAT, NTFS, EXT2/3 and HFS+ partitions, and looks for deleted files in over 300 common formats, including ZIP, HTML, PDF and JPG to name a few.
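PhotoRec is interactive, so there's little to memorise – point it at the device (or a disk image) and follow the menus. For example, assuming your memory card shows up as /dev/sdc:

sudo photorec /dev/sdc   # pick the partition, filesystem type and a destination directory from the menus

Recovered files are written to the destination directory you choose, with generated names rather than the originals.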
2. e2fsprogs - check and correct filesystem inconsistencies
Hard disks do a lot of work. Modern OSes perform so many read and write operations, it isn't surprising that filesystems inevitably develop inconsistencies here and there over time.
This is why all mainstream Linux distros bundle the e2fsprogs package, which includes tools that check and repair EXT2, EXT3 and EXT4 filesystems. For other filesystems, you can use xfs_repair, jfs_fsck and fsck.reiserfs. Most modern distros invoke the filesystem check automatically after a particular number of reboots. If the check fails, it's probably because it can't locate the filesystem metadata.
E2FSCK: You can use e2fsck to mark bad blocks in a disk, so they aren't used for storing data. This is helpful if your hard disk is starting to throw up errors.
In that case, use the dumpe2fs utility to locate a backup superblock and point e2fsck at it. When e2fsck encounters problematic data, it places it in the 'lost+found' directory, along with the inode number that the data is associated with. If there's a great deal of data corruption on your hard disk and you have lots of files in 'lost+found', it's best to restore your data from a backup.
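As a sketch, assuming the damaged filesystem is /dev/sda2 (unmount it first):

sudo dumpe2fs /dev/sda2 | grep -i superblock   # list the locations of the backup superblocks
sudo e2fsck -b 32768 /dev/sda2                 # run the check using the backup superblock at block 32768

The block number 32768 is just a typical location for a filesystem with 4KB blocks; use one of the locations dumpe2fs actually reports.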
3. ntfsresize - resize NTFS partitions
Like the e2fsck tool, ntfsresize is included with most mainstream Linux desktop distributions. It helps you resize a Windows partition on a 32-bit or 64-bit installation without defragmenting the disk first. This tool checks NTFS partitions for errors and comes in handy when you want to expand and shrink the filesystem.
It's especially useful when you're working with partitions that Windows refuses to recognise because of bad sectors. The ntfsresize tool may alter the Windows boot-up, depending on how it's used. For example, it schedules an NTFS consistency check after the first boot into Windows. If you've experimented with the size of the partition, Windows might also throw up a system settings change message.
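A cautious shrink, assuming the Windows partition is /dev/sda1, might look like this:

sudo ntfsresize --info /dev/sda1                   # check the filesystem and report the minimum possible size
sudo ntfsresize --no-action --size 40G /dev/sda1   # dry run: test the resize without writing anything
sudo ntfsresize --size 40G /dev/sda1               # perform the actual resize

Note that ntfsresize only resizes the filesystem, not the partition itself – you still need a partitioning tool (or GParted, which wraps both steps) to adjust the partition boundaries afterwards.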
4. FSArchiver - duplicate entire partitions
The only effective answer to a damaged disk is a backup. There's no shortage of backup utilities out there, but they aren't all as smart as FSArchiver. With FSArchiver, you can save the contents of your filesystem into a compressed archive. This saves you space, and the backup is easily mountable in read-write format.
You can also restore backups into smaller or larger partitions. To ensure the integrity of the backed up data, FSArchiver checksums individual files, which it verifies during restoration. FSArchiver's main advantage over traditional archiving tools is that even if one file in the backup becomes corrupted, the tool will only skip over the specific file that's gone bad and still restore the rest of the backup as normal.
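A typical backup-and-restore cycle looks like this, with the device names and archive path as placeholders:

sudo fsarchiver savefs /backup/root.fsa /dev/sda1             # archive the filesystem on /dev/sda1
sudo fsarchiver archinfo /backup/root.fsa                     # inspect the archive's contents
sudo fsarchiver restfs /backup/root.fsa id=0,dest=/dev/sdb1   # restore to a (possibly different-sized) partition

The id=0 refers to the first filesystem stored in the archive; a single archive can hold several.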
5. chntpw - reset Windows passwords
Password protecting your accounts is a good way to keep them secure, but all hell breaks loose when you forget your own password. Most web services have a backup plan – a way for you to retrieve your forgotten password, either by emailing you a new one or verifying your identity with a secret question.
Unfortunately, Windows has neither. There's little you can do if you forget your Windows password. Or is there? The chntpw tool can be used to reset passwords on Windows installations. It works by reading the Security Account Manager database under the Windows registry. Just boot from the live CD, point it to your Windows installation and breathe a sigh of relief as it prints a list of all the users on the installation.
Reset the password for the admin user – you can ignore the rest. If you want to recover your password instead of setting a new one, use Ophcrack.
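In outline, and assuming the Windows partition is mounted at /mnt/windows (the path casing varies between Windows versions):

cd /mnt/windows/Windows/System32/config
sudo chntpw -l SAM                 # list the user accounts in the SAM database
sudo chntpw -u Administrator SAM   # interactively blank or reset that account's password

Blanking the password is the most reliable option chntpw offers; setting a specific new password is known to be less dependable on some Windows versions.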
6. Sfdisk - back up partition tables
If you juggle multiple OSes, it can be easy to mess up the partition table. Tools such as GParted mean that creating and resizing partitions isn't much of a chore any more, but they aren't very helpful when you've got a misaligned partition table.
That's when you need sfdisk. It's a small command line utility that's included with every Linux distro, and will back up, edit and restore partition tables. You use a considerable amount of disk space backing up data, so it only makes sense to back up the few bytes taken up by the partition table.
SFDISK: If you juggle multiple operating systems, it's wise to use the sfdisk utility to back up the partition table in an easy to read (and modify) text file.
It'll go a long way in recovering from a botched OS install. You can back up your partition table with sfdisk -d /dev/sda > sda_table.txt and restore it with sfdisk /dev/sda < sda_table.txt. If you have a RAID setup, you can mirror a partition table from one disk to another using sfdisk -d /dev/sda | sfdisk /dev/sdb.
7. ddrescue - recreate a damaged disk
We've looked at tools that will help you check and correct a damaged partition, but what if a disk throws up read errors? This isn't unusual for older hard disks and optical drives. If you have such a disk, start by making a copy of the failing drive with ddrescue, then try to repair the copy. If your data is really important, use the copy as a master for a second copy and try to repair the second copy.
DDRESCUE: You can use ddrescue to recreate a damaged hard disk, but don't forget that it overwrites data on the partition you're copying to by default.
The basic operation of ddrescue is fully automatic – it tries to recreate the data on a damaged disk. Better still, if you run it on two or more damaged copies of a failed disk, you might end up with a complete and error-free version. The tool uses a logfile to speed up the process by only reading the missing blocks.
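The classic two-pass approach from the ddrescue manual, with /dev/sdb as the failing disk and /dev/sdc as a same-sized or larger destination (remember, the destination gets overwritten):

sudo ddrescue -f -n /dev/sdb /dev/sdc rescue.log       # first pass: grab the easy data, skip bad areas
sudo ddrescue -d -f -r3 /dev/sdb /dev/sdc rescue.log   # second pass: retry the bad areas up to three times

The rescue.log file is what lets the second run concentrate on just the blocks that are still missing.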
8. Rsync - back up remotely
Keeping local backups isn't a clever move unless you mirror disks. What you need is a utility that backs up data over a network with very little overhead, and nothing does that better than the rsync CLI utility.
RSYNC: Use Grsync to simulate a backup and assess errors that may occur.
When run for the first time, the rsync command may seem a bit sluggish. However, all it needs to do from then on is transfer the bits that have changed in each directory or file since the last run. Since it's a CLI tool, you can schedule it to do unattended remote backups. If the CLI isn't your thing, try the various GUI avatars, such as Grsync, which runs on Linux and Windows. If you need something enterprise-ready, try BackupPC.
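A minimal mirror of a directory to a remote machine might look like this (the hostname and paths are placeholders):

rsync -avz --delete ~/Documents/ user@backupserver:/backups/documents/

Here -a preserves permissions and timestamps, -v reports progress, -z compresses data in transit and --delete removes files from the backup that you've deleted locally. Drop that line into a cron job and you have unattended nightly backups.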
9. GAG - advanced boot loader
MBRs are easily damaged if you're careless while installing multiple OSes, or if you clone a bootable partition. If you've cooked yours, it's a good excuse to switch to the GAG boot manager.
GAG can boot nine different OSes installed in the primary or logical partitions of the disk. It's easy to configure and supports all the features you'd expect from a boot manager, including a timer to boot into the default OS, and password protecting the configuration menu.
GAG: Although it's more graphical than other bootloaders, GAG's interface is still entirely keyboard driven.
You can install GAG from Windows, Linux, or from one of the rescue-centric live CDs. When using GAG, install the Linux boot loader (GRUB) in the boot sector of the root partition (such as '/dev/sda6'), not in the MBR.
10. Inquisitor - stress test hardware
Why wait for hardware to fail? It's a good idea to test your system thoroughly from time to time to make sure it can handle the stress it's put under. The Inquisitor live CD has lots of modules to test the various components in your system, such as hard disks, the disk controller, optical disks, USB drives, CPU, memory and more.
The live CD also comes in handy for stress-testing an overclocked configuration. You can use Inquisitor to benchmark your computer, which is useful when comparing the performance of different configurations. There's also the Phoronix Test Suite, which can be used for benchmarking your system and comparing it with configurations uploaded by other users.
11. chkrootkit - check for rootkits
Computer viruses are the least of a power user's worries. An intruder can wreak much more damage than a virus by masking their intrusion with a rootkit, but help is just a scan away. Using chkrootkit, you can check your installation for many known rootkits.
The program uses tools such as grep to check if '/proc' entries are hidden from ps and the readdir system call. It performs a battery of tests to find signs of over 60 rootkits. Although it's a CLI utility, you shouldn't schedule it to run unattended. To be doubly sure that you're running a clean ship, also try the rkhunter utility.
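Running it is as simple as:

sudo chkrootkit      # run the full battery of tests
sudo chkrootkit -q   # quiet mode: print only output judged suspicious

Expect the occasional false positive – verify anything it flags before reaching for the reinstall disc.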
12. md5deep - search for lost files
You'll probably need to recover lost files at some point, but how do you ensure their integrity? If your system has been compromised, the attacker might have replaced the original files with malicious copies.
That's why you should keep a hash digest of all the files on your system. Depending on the size of your filesystem, this could be a daunting task – unless you use md5deep. It recursively computes the MD5 hash of every file inside a directory. Moreover, it can use those hashes to find lost files and then verify their integrity. Binaries for the tool are available for both Linux and Windows.
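A baseline-and-verify cycle might look like this, using /bin and /sbin as example targets:

md5deep -r -l /bin /sbin > baseline.md5   # -r recurses into directories, -l stores relative paths
md5deep -r -x baseline.md5 /bin /sbin     # later: -x lists files whose hashes are NOT in the baseline

Anything the second command prints has changed since you took the baseline, so it deserves a closer look.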
Tutorial: How to protect your website from hackers
This information helps hackers learn the hardware and software structure of the site, its capabilities, back-end systems and, ultimately, its vulnerabilities. It can be eye-opening to discover the detail a hacker can see about your website and its systems.
The way the internet works means that nothing can ever be entirely invisible if it's also to be publicly accessible, and anything that's publicly accessible can never be truly secure without serious investment, but there's still plenty you can do.
Now we're going to examine some of the steps you can take to ensure that any hacker worth their salt will realise early on that your web presence isn't the soft target they assumed it was, and to get them to move on.
Robot removal
Many developers leave unintentional clues as to the structure of their websites on the server itself. This tells the hacker a lot about their proficiency in web programming, and will pique their curiosity.
Many people dump files to their web server's public directory structure and simply add the offending files and directories to the site's 'robots.txt' file.
This file tells the indexing software associated with search engines which files and directories to ignore, and thereby leave out of their databases. However, by its nature this file must be globally readable, and that includes by hackers.
Not all search engines obey the 'robots.txt' file, either. If they can see a file, they index it, regardless of the owner's wishes.
GIVEN UP BY GOOGLE: 'Robots.txt' files are remarkably easy to find using a Google query
To prevent information about private files falling into the wrong hands, if there's no good reason for a file or directory being on the server, it shouldn't be there in the first place.
Remove it from the server and from the 'robots.txt' file. Never have anything on your server that you're not happy to leave open to public scrutiny.
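It takes nothing more than a one-line command to read any site's robots.txt – which is exactly the point. Try it on your own site (example.com is a placeholder):

curl -s http://www.example.com/robots.txt   # fetch the file exactly as any crawler or hacker would

If anything listed there would worry you in a hacker's hands, it shouldn't be on the server at all.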
Leave false clues
However, 'robots.txt' will also give hackers pause for thought if you use it to apparently expose a few fake directories and tip them off about security systems that don't exist.
Adding an entry for an intrusion detection system, such as 'snort_data' for example, will tell a false story about your site's security capabilities. Other directory names will send hackers on a wild goose chase looking for software that isn't installed.
If your website requires users to log into accounts, ensure that they confirm their registrations by replying to an email sent to a nominated email account.
The most effective way of preventing a brute force attack against these accounts is to enforce a policy of 'three strikes and you're out' when logging in. If a user enters an incorrect password three times, they must request a new password (or a reminder of their current one), which will be sent to the email account they used to confirm their membership.
If a three strikes policy is too draconian for your tastes, or you feel that it may lead to denial of service attacks against individual users by others deliberately trying to access their accounts using three bad passwords, then it's a good idea to slow things down by not sending the user immediately back to the login page.
After a certain number of failed attempts, you could sample the time and not allow another login attempt until a certain number of minutes have passed. This will make a brute force attack very slow, if not practically impossible to mount.
Interacting with your website like a normal user will provide a hacker with a huge amount of free information about the way the site works. They will spend a long time reading the code loaded into their browser. The browser and the code (including HTML) served as part of each page is what's known as the client side of things.
For example, one common technique used to keep track of user data is to send information about the user's session (their username and so on) to the browser and expect it to be sent back. In other words, the site has the browser keep track of which user is interacting by having it announce their credentials each time it submits any information.
In times past, these credentials might have contained a whole shopping cart, meaning people could simply edit the values of cart items before pressing the checkout button, thereby managing to purchase items at rock bottom prices without the site owner realising anything was wrong.
This led to the upsurge in remote shopping carts, where the only information handled by the browser is an encrypted cookie, which is passed to a remote payment handling system such as Google Checkout or PayPal.
Perhaps worse is the use of obviously named, unencrypted variables in the URL, which are passed to a server-side script to tell it which user is interacting with it. Without appropriate checks, this can lead to serious vulnerabilities.
When I was a network security consultant, one assignment was to assess the internal security of a company's network. I found unencrypted usernames and passwords going by on the network and headed for an internal time management system with a web interface.
After using these to log in, I was dismayed to discover that the user's account number on the system was part of the URL. What happened if I incremented the account number by one? I got full read/write access to someone else's data.
Sometimes, however, variables in URLs can be exploited in benign, useful ways.
For instance, when searching for messages in a forum, you might be presented with a large number of pages and no quick way of going directly to one in the middle of the range. The URL might contain the page number or even the result number that begins the current page. Try modifying this and pressing [Enter] to see if you're taken to the page you want to access.
There are also plenty of other pieces of information that a site might expect to receive from the browser verbatim, which can be manipulated or simply read for the useful information they contain.
Many of these pieces of information are contained within hidden fields. All the hacker needs to do is edit the page's source code locally, re-read it into a browser and click the appropriate link to send it back to the server.
ON SHOW: Hidden variables embedded within a web page. What might these variables do, and what would happen if one was changed?
Consider a field called 'Tries'. If it's part of a login page, there's a good chance that it contains the number of login attempts the user has made. Resetting it to '1', '0' or something like '-1000' could give the hacker a way of bypassing a three-strikes rule if the server only locks the account once this variable's value exceeds three.
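To see why this matters, note that nothing forces an attacker to use your form at all – the doctored field can be submitted directly. A hypothetical sketch (the URL and field names are invented for illustration):

curl -d "username=alice&password=guess&tries=0" http://www.example.com/login   # resubmit with the counter reset

The lesson is that any value that matters for security, such as a login-attempt counter, must be held and checked on the server, never round-tripped through the client.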
Fields that hold usernames and passwords are meat and drink to keylogging and other snooping software.
Input box names
Another vulnerability that stems from having the client side keep track of the user's session is a web page that uses the same names for its input boxes every time.
This may be convenient for the site's users, who can use autocomplete in web forms and select from previous input box values – but if they wander away from their computer without locking the screen, anyone else can select from those lists too.
If the browser also fills in passwords, an interloper can access pretty much any site where the user has an account. Banks have started randomising the names of input boxes to prevent this problem, but most privately owned commercial websites don't.
Never ask client-side code to keep track of a user's session using unencrypted data. Instead, use an encrypted session cookie to store a session ID, and keep track of the session in a back-end database.
LIMITED INPUT: Decide which inputs you will allow in an input field rather than trying to guess everything that a user may enter – deliberately or accidentally
Cross-site scripting vulnerabilities (or XSS for short) are a class of bugs that hint at how much ingenuity there is in the online security community. XSS vulnerabilities can allow malicious hackers to inject code into served web pages that in turn can steal server-side information.
An XSS attack takes the form of a malicious hyperlink to a third-party site. It might be sent in spam or embedded in a site itself.
This is possible because hyperlinks can contain parameters designed to pass information to the back-end server, such as the current session cookie.
It's possible to supply the value for a variable using the