Friday, December 24, 2010

Credit card companies can kill spam

Wikileaks has polarised many people, especially with its latest round of revelations.

There are those who believe the site and its release of confidential documents is evil, potentially exposing innocent people to great risk and embarrassing governments all over the world.

There are others who believe it produces a long-needed level of transparency within government that none in power are prepared to deliver on a voluntary basis.

No doubt, the argument will continue for some time to come -- but there is one positive aspect to come from the latest chapter of this saga.

When PayPal and various credit card companies withdrew their services in handling payments and donations to Wikileaks, they set an important precedent. They have shown that they are not just a "carrier" of funds; they can also be a censor.

Why is this important?

Well for too long, our email boxes have been laden with spam promoting all kinds of fake and dubious products. Pitching everything from herbal weight-loss tablets to creams and potions guaranteed to increase the size of your manly bits, these emails have been the bane of our lives for too long.

A great deal of time and effort has been invested in creating clever software that automatically filters out the spam that would otherwise clog our inboxes and, each year, huge sums of money are spent provisioning bandwidth that is simply wasted delivering that spam to unwilling recipients.

Well perhaps now Visa, Mastercard and PayPal can come to our rescue.

By actively moving to halt the flow of funds to Wikileaks, these companies have shown that they have the ability and willingness to step in and try to intervene in activities that *may* be illegal -- even if that's only a suspicion and not proven.

So right now, I'm expecting them to step up to the plate and immediately stop providing merchant services to all these shady ventures and products that are promoted by way of spam.

If it's good enough to deny their services to Wikileaks then it must also be good enough to cut off all those spammers and others who regularly breach anti-spam laws. After all, there's no point in promoting a product by way of spam if you immediately lose the ability to accept payments; is there?

Visa, Mastercard and PayPal claim that the actions they took over Wikileaks were not politically motivated but simply part of their obligation not to provide services to those who engage in criminal activities. Well perhaps they ought to bone up on their lawbooks and take note of the fact that just about all Western nations have now banned spam by way of legislation.

So, here's to a 2011 in which we should be able to rely on the banks, PayPal and credit card companies to really "deal to" spam, or explain their apparent hypocrisy to the rest of us.

Merry Christmas all, and a Happy New Year.

Friday, December 17, 2010

The Net, better than a bucket for catching leaks

It seems that the internet has become synonymous with leaked information this week.

If it's not a new chapter in the ongoing Wikileaks saga and the fate of its founder Julian Assange, it's a leaked screen shot from Yahoo which indicates where it'll be cutting costs and services, or a snafu over at Facebook which prematurely exposed new features that nobody was supposed to know about.

And that's the problem with the internet and modern digital technology -- it's so damned fast and ubiquitous that a single slip of the fingers or mistaken click of the mouse can instantly expose your secrets to all the world. What's more, it's becoming increasingly difficult to protect your valuable digital data from being stolen or distributed without permission.

Take the Wikileaks situation for example...

There are hundreds of thousands of documents involved in the latest tranche of information being published from the secret files of the US government. Can you imagine how hard it would have been to smuggle that much data out of a government department before the advent of digital technology?

One hundred thousand A4 pages of typed material alone is a stack some 10 metres tall -- not something you can smuggle out past a security guard by slipping it into your sock.

However, with microSD cards now capable of storing 8GB of data, a single fingernail-sized fragment of plastic-encapsulated silicon can carry nearly a quarter of a million typed pages' worth of "secrets" and be easily secreted about one's person without raising suspicion. It's now quite practical to swallow a company or government's secrets along with your lunch and recover them a day or two later from the convenience of your "convenience".
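For those who like to check the arithmetic, here's a back-of-envelope sketch. The paper thickness and bytes-per-page figures are my own assumptions (a formatted document page of around 32KB, rather than bare plain text), not anything measured:

```python
# Back-of-envelope figures for the smuggling argument above.
# Assumptions (mine): an A4 sheet is ~0.1 mm thick, and one typed
# page stored as a formatted document runs to ~32 KB.

PAGE_THICKNESS_MM = 0.1          # typical 80 gsm office paper
BYTES_PER_PAGE = 32 * 1024       # assumed size of one formatted page
CARD_CAPACITY_BYTES = 8 * 10**9  # an 8 GB microSD card

# Height of 100,000 printed pages, in metres
stack_m = 100_000 * PAGE_THICKNESS_MM / 1000
print(f"100,000 pages stack to about {stack_m:.0f} m")

# Pages that fit on the card
pages = CARD_CAPACITY_BYTES // BYTES_PER_PAGE
print(f"An 8 GB card holds roughly {pages:,} pages")
```

At those (assumed) figures, the stack comes out at 10 metres and the card holds a bit under a quarter of a million pages, which squares with the claims above.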

A quick look on any of the popular Chinese online retail websites also shows that something as useful as a USB drive can be had in many different forms, many of which you'd never dream were actually concealing a huge chunk of digital flash memory.

But not only does this technology make it easy to actually smuggle information out of the places where it might be found but it also simplifies the process of duplication and dissemination.

Photocopying 200,000 typed documents would take an age -- copying the contents of a flash drive or memory card takes just a few minutes at most. Send those files via the internet and they can be half way around the world in just a few more minutes.
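Putting some rough numbers on that (the transfer speeds here are purely my own assumptions, for illustration):

```python
# Rough copy/transfer times for 8 GB of smuggled documents.
SIZE_BYTES = 8 * 10**9

usb_speed = 25 * 10**6          # ~25 MB/s card reader (assumed)
net_speed = 100 * 10**6 / 8     # 100 Mbit/s link (assumed), in bytes/s

card_minutes = SIZE_BYTES / usb_speed / 60
net_minutes = SIZE_BYTES / net_speed / 60
print(f"Card copy : {card_minutes:.0f} minutes")
print(f"Net upload: {net_minutes:.0f} minutes")
```

Around five minutes to copy the card and ten-odd minutes to push it across a fast link, versus weeks at a photocopier.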

Just like privacy, it would appear that secrecy is now a dying concept -- something that has been rendered near-impossible by technology.

Even if you think your secrets are safe in the cloud -- think again. As the recent database break-in at Gawker Media shows, even online security is a myth.

It is true, we do live in interesting times.

Saturday, December 11, 2010

The bomb in your pocket

Lithium batteries are a cornerstone of modern portable technology.

They allow an incredible amount of energy to be stored in a very small space and thus enable our mobile phones, laptops, tablets, media-players and other devices to run for reasonable periods of time between recharges.

However, there is a problem with concentrating all that energy into such a tiny package.

Sometimes, they go bang!

The average mobile phone battery, for instance, may have a capacity of around 1 amp-hour at 3.7V, which is a total of 3.7 watt-hours (just over 13 kilojoules for the physicists amongst us), or about the same as a heavy rifle bullet in flight.
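The conversion from battery ratings to joules is easy enough to check yourself:

```python
# Energy stored in a typical phone battery: capacity (Ah) x voltage (V)
# gives watt-hours; multiply by 3600 seconds/hour for joules.
capacity_ah = 1.0   # 1 amp-hour
voltage = 3.7       # nominal lithium cell voltage

watt_hours = capacity_ah * voltage
joules = watt_hours * 3600
print(f"{watt_hours} Wh = {joules:,.0f} J")  # 13,320 J, i.e. ~13 kJ
```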

When released over several hours, most of that energy is put to good use, running processors, illuminating LEDs and LCD backlights, even sending and receiving data via radio-frequencies.

Release the same amount of energy in just a fraction of a second though -- and "boom!" -- you have a problem.

Most of the time, these lithium batteries work as intended and simply release their energy in a trickle, as demanded by whatever appliance or device they're installed in. On occasion however, something goes wrong and the result is a fire or explosion.

But just what is it that makes a lithium battery so prone to the kind of unexpected and sudden energy release we're talking about?

Well this article gives an insight into what goes on when these batteries are charged. As you can see, massive physical changes take place within the tiny conductors that make up the plates of the batteries and, over repeated charge/discharge cycles, this can cause physical damage to the cells -- eventually resulting in catastrophic failure.

So what is being done to try and make lithium batteries safer?

Well there have been alternative chemistries developed, such as lithium iron phosphate (LiFePO4). Batteries based on this chemistry have proven to be far safer and almost immune to the kind of fiery venting that the more common lithium-ion and lithium-polymer cells sometimes produce. Unfortunately, LiFePO4 cells are more expensive and don't have the same energy density, hence are not favoured by gadget-makers.

In fact, it looks as if lithium batteries may get even more dangerous before they get safer because their energy density looks set to see a ten-fold increase, if this article is anything to go by.

Personally, I find it amusing that in this era of paranoia, when passengers are frisked before they're allowed on an aircraft and at one time it was even forbidden to take more than a few ml of any liquid onboard for fear it may be part of an incendiary device -- nobody cares that most of us are carrying a rather potent bomb everywhere we go -- in the form of our mobile phones, netbooks, iPads, iPods and other lithium-powered devices.

But fear not, airlines are aware of the danger and train their staff to respond accordingly -- as in this FAA video.

I never had these problems with my old zinc-carbon Eveready AA cells. Ain't progress a wonderful thing?

Friday, December 3, 2010

IBM to be arrested for breaking Moore's Law?

Moore's law has been one of the most enduring truisms of the computer era.

For those who aren't familiar with Moore's Law (and there must be at least one person in the world who's been sleeping for the past 45 years), Gordon Moore of Intel once observed that the number of transistors that could be squeezed onto a single chip would double every 12 months -- a figure he later revised to every two years.

And you know what? He was dead right.

The effect of this non-linear growth in the density of integrated circuits is that computing power has grown roughly in unison with the ever-increasing complexity of the chips that comprise our modern CPUs.
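It's easy to underestimate what doubling at a fixed interval actually compounds to. As a sketch (using Moore's revised two-year doubling, and the oft-quoted Intel 4004 as a starting point):

```python
# Transistor count after t years with a doubling every d years:
# count = start * 2 ** (t / d)
start = 2300         # Intel 4004 (1971), ~2,300 transistors
years = 40
doubling_period = 2  # Moore's revised (1975) figure

count = start * 2 ** (years / doubling_period)
print(f"~{count:.2e} transistors")
```

That lands in the billions, which is roughly where 2010-era CPUs actually sit.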

However, physics looked as if it was catching up to Moore's law and might even render it invalid.

The problem is that in order to increase the number of transistors and provide faster computer chips, the size of each individual transistor and the paths that connect them has been shrinking.

Eventually, we must reach a point where we just can't make things any smaller, for a number of reasons.

Once things get too small, the weird world of quantum effects starts to replace that of classical physics. Instead of flowing smoothly from junction to junction, as they do at larger scales, electrons start falling prey to those quantum effects, with certainty replaced by probability.

Those who develop our computer chips have been warning for some time that we're rapidly approaching the threshold of miniaturization that will effectively create a wall to the seemingly endless rule of Moore's law.

However, when faced with a wall that blocks your path, what do you do?

The smart money says that instead of banging your head against it and drawing blood, you're much better off setting out in a new direction, one where the wall won't be a problem.

And indeed, that's exactly what some researchers are doing. What's more, these scientists claim that when they succeed, Moore's law will be eclipsed by new laws that dictate a massive leap in computing power hitherto unimagined by Gordon Moore or any of his fellow workers at Intel.

Researchers at IBM now claim that the future of computing is something they call "nanophotonics" (hooray, a new buzzword!).

The increase in raw computing power that this blend of conventional silicon technology and optical computing promises to deliver is mind-boggling.

IBM says that its nanophotonic processors will deliver speeds measured in exaflops (10^18 floating-point operations per second) rather than the current record-holding super-computer performance of a meagre 2.67 petaflops (2.67 * 10^15 floating-point operations per second).

The first of IBM's new computers using this technology is due to be installed next year and, although it will deliver a humble 10 petaflops, it will do so using far less space and power than a conventional computer of that size would require.

So the future of super-computing is bright -- quite literally.

And Moore's law?

Well that may become a quaint relic of the 20th century -- itself eclipsed and replaced by a new law that defines much faster growth over shorter timeframes.

I love this industry.

Friday, November 26, 2010

iPad, where the smart money is?

In his bid to overcome the public's perception that internet=free, Rupert Murdoch has begun erecting paywalls around his various online publications with, according to reports, only mediocre success.

The reality is that I suspect even Murdoch himself doesn't honestly believe that trying to charge people for content they've been getting for free for so long will succeed.

That's why he's pouring $30m into creating content for the nascent iPad publishing market where he hopes that a change of medium will also mean a change of heart from penny-pinching readers.

This time he's probably a lot closer to the money, since I'd wager that those who invest in an iPad probably have a higher level of disposable income as a group, than do the great unwashed masses who simply browse the web for their entertainment and information.

Adding further weight to the belief that Murdoch has finally spotted a solution to leveraging content for cash, uber-successful entrepreneur Richard Branson has also announced that he'll be launching an iPad-based publication, although it's not clear whether this will be solely ad-funded or if it will involve a subscription.

Branson's focus on the iPad platform adds a significant endorsement to the future of the device as an alternative to paper for "glossy format" magazine-type publications -- although somewhat different to Murdoch's tabloid pulp offerings.

Both men now face the task of finding innovative, skilled developers and content creators who can best leverage the extra interactivity and power that this new platform provides. The worst thing they could do is simply repurpose existing content from print or the web to the iPad. Fortunately, it would appear that neither intends to adopt such a crude approach.

I suspect that these two giants will be the first of many who embrace the iPad as the ideal transition platform and a great way to prepare for the inevitable arrival of low-priced full-colour, full-motion e-readers based on flexible display technology. The lessons learned through developing content and strategies for presenting it on the iPad will stand such publishers in good stead to take the next step, once the technology is ready.

What does this mean for web-users who currently expect to find all the content they want on webpages?

Well I suspect that publishers like Murdoch will reserve their most valuable content for their iPad publications, where it can be leveraged to extract the maximum revenues by way of subscription.

Will that mean a decline in the quantity and quality of web-based content?

I doubt it. In fact we may see quite the opposite.

As the Murdochs and Bransons of the world shift to iPad, that will leave a void that there will be no shortage of other publishers willing to fill. In fact, the change may be quite refreshing.

Free lunches are still on the menu for internet users.

Friday, November 19, 2010

Is a download worth more than a disk?

This is the second column I've written this week about the fact that the Beatles catalog of music is now available on iTunes but this time I want to talk about the issue of value.

EMI, the company that holds the rights to the Beatles' music, has been a long-time hold-out on releasing this stuff to be sold on iTunes and I think I know why...

In fact, if you look at something as simple as the price, the reason for EMI's reluctance becomes clear.

To buy the Beatles music as a download from iTunes will (believe it or not) cost more than the price of buying the same tracks on a good old CD.

It would appear that EMI has demanded a very high price for each and every track and album that's sold on iTunes. Perhaps this has been the sticking point that has kept this music off the official iTunes download list.

So what do you get for your money that makes a downloaded version worth more?

Well nothing actually.

Buy the CD and you get something physical that you can hold up to the light and see pretty rainbow hues. Buy the digital download and all you get are some of the bits and bytes on your hard drive or a memory card re-arranged according to some magic that allows music to be extracted from them.

If that hard drive or memory stick suffers a catastrophic failure -- you've got to download it all again and possibly even pay again. You have "virtual" music.

However, if you feel inclined, you can copy your Beatles CDs and even rip them into the very same digital magic that allows the music to be stored on a hard drive or memory card.

So why on earth would anyone want to spend more on a download than a disk?

Perhaps it's because the downloads will become "collectible" -- seeing as how they're so new and mark the dawn of a new era in the distribution of this prestigious collection?

Ah... no.

You see, unlike your old Vinyl albums or even your store-bought CDs, a download doesn't come with anything that effectively identifies it as a "limited edition" and, even less fortunately, in most cases you can't resell them to another person. You're only buying a license for your own use. Sure, as an "album download" the iTunes version comes with files you can print to produce facsimiles of the album covers and notes -- but because you *can* print them they're hardly collectible are they?

So, whereas some of us, when we were poor students needing a few extra dollars to make ends meet, would rush down to the second-hand music store with our old albums and flog them for a few quick dollars -- those who buy their music from iTunes and other download services have no such option.

Now, can someone tell me again why, at least in the case of The Beatles, we're actually paying more for less with digital downloads?

Friday, November 12, 2010

Why you should bequeath your grandkids an iPhone

When the word antique is used, most of us will think: furniture, paintings, crockery and other products of centuries-past that now fetch considerably more than their original value.

For those like myself who were born into a world where computers were once only to be found in a few lucky universities, government establishments and large corporations - the concept of an antique computer seems somewhat incongruous.

However, anyone who was smart enough to hang on to one of those early microcomputers from the 1970s, especially any of the particularly iconic units, is now the proud owner of a rapidly appreciating asset that almost qualifies as an antique.

A great example of this is the rare Apple 1 computer which goes up for auction this month in London.

The bare-bones system is little more than a circuit board, cassette tape with some basic firmware and a handful of printed manuals, including a letter from Steve Jobs, along with an invoice for the princely sum of $666.

So just what is such an "old" microcomputer worth these days?

Well, despite the depressed economy, the auctioneers are expecting this piece of computer history to sell for between NZ$200,000 and NZ$300,000.

That's a rather stunning return on the original investment, don't you think?

Of course it helps immensely that there were only 200 of these systems ever made and the vast majority have long-since been consigned to the scrapheap by owners unaware of exactly what they were throwing away.

So what about those other antique computers that were so common over 30 years ago?

How much for a Commodore Pet? A TRS80 Model 1? Or perhaps a humble Sinclair ZX80?

Well chances are that if you have one of these, or similar machines carefully packed away, complete with manuals, disks and other assorted paraphernalia -- and if it's in pristine condition, you may also be sitting on a little goldmine of antiquity.

These days, even a "new in box" sample of the very first IBM PC would likely be worth a pretty penny to a collector of such things.

So, if you've just bought yourself a brand new Apple iPad, or have an original Apple iPhone in near-new condition -- don't ever throw it away when it comes time to upgrade. Box it up carefully and leave it to your grandkids. It may appreciate in value far more rapidly than cash in the bank and thus make a fine inheritance.

Saturday, November 6, 2010

Can TV improve our internet service?

Across the ditch in Australia, the battle between television and the internet may be about to take an interesting turn.

Studies have shown that for quite some time now, people have been turning off their TV sets in favour of spending time online, either engaged in social networking, watching YouTube videos or soaking up other content.

Many broadcasters have attempted to keep their grip on a rapidly defecting audience by serving up a growing range of previously broadcast material via the web. Here in NZ we've seen the same thing, with both TVNZ and TV3 delivering "on demand" TV programmes from their broadcast catalog.

However, in Australia, television may be coming to the rescue of those websurfers who are finding it difficult to get a high-speed broadband connection.

One prospect currently being considered is to use the huge number of TV antennas that will become redundant with the switch to fully-digital broadcasting as a method of providing wireless internet services.

While most densely populated areas will be better served by existing DSL and fibre technologies, there are a good number of smaller centres with just a few hundred to a few thousand residents for whom the laying of new high-speed cables may well be uneconomic.

The CSIRO in Australia has already been trialing advanced wireless technology that promises speeds up to four times faster than conventional WiFi systems and which would turn those old TV antennas into a powerful link to the cyberworld.

The only catch is that the technology is, according to those developing it, still a couple of years away from commercial production -- by which time those antennas may have already been pulled down or more expensive fibre may have been laid.

The Australian NBN company has said it was examining the option but considered that for the time being, existing wireless technologies may be the only alternative for smaller, more remote provincial towns.

If eventually implemented, the CSIRO's system has the potential to be faster and cheaper than those interim options.

I wonder, with the switch to digital TV only a few short years away here in NZ, whether we might be able to use the Aussies' technology -- which should be commercialised at just the right time.

There are plenty of small rural settlements around NZ capable of receiving a terrestrial TV signal, whose TV antennas and TV repeater systems could be repurposed to the task of delivering high-speed *affordable* broadband.

It's either that or sell the frequencies relinquished by the TV broadcasters to the mobile phone companies.

Which option would you consider the best one?

Friday, October 29, 2010

Into every cloud a little rain may fall

One of NZ's leading ISPs has suffered an embarrassing failure which serves to highlight the risks associated with cloud computing.

An unspecified number (believed to be in the hundreds) of Orcon customers have discovered that all the emails they believed were safely stored on the ISP's servers have vanished.

Despite the "best efforts" of the ISP's technical staff, it appears that recovery of the missing emails is not possible.

Perhaps the most common form of cloud-computing is web-based email and this incident serves to show just how vulnerable cloud users can be if the provider's backup and disaster-management strategies are flawed.

Although to most people, the loss of their archived web-based email is more of an inconvenience than a major problem, it is claimed that a number of the affected Orcon customers were businesses that now face losses as a result of the missing material.

With the concept of cloud computing infiltrating ever-more areas that were previously the domain of in-house systems, it is crucial that any data stored on the cloud is also backed-up locally "just in case".

While a loss of email messages may be an inconvenience, the loss (for instance) of a year's accounting information, including current invoices, could be crippling for any business forced to face such a disaster.

It would seem prudent to remind those who rely on cloud-based solutions that they should carefully read the contracts under which those services are provided and check that there is compensation available for losses that may result if the service or the data it contains becomes unavailable.

Even better, it would pay to choose a cloud-based provider who also delivers the ability to back up your own data in a format that can be exported to other systems, should the unthinkable happen.
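To make "a format that can be exported to other systems" a little more concrete: one such format is the humble mbox file, which almost any mail client can import, and which Python's standard library can write. This is purely my own illustration (with invented message content), not anything any particular provider offers:

```python
import mailbox
from email.message import EmailMessage

# Sketch: keep local copies of messages in an mbox file -- a plain,
# widely supported format any future mail client can import.
backup = mailbox.mbox("mail-backup.mbox")

msg = EmailMessage()
msg["From"] = "someone@example.com"   # example data, not real mail
msg["To"] = "me@example.com"
msg["Subject"] = "Invoice for October"
msg.set_content("Please find the invoice attached.")

backup.add(msg)
backup.flush()
print(f"{len(backup)} message(s) now in local backup")
backup.close()
```

Run something along those lines regularly against your webmail account (via POP or IMAP) and an Orcon-style disaster becomes an inconvenience rather than a catastrophe.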

In the meantime, at least some of Orcon's email users will be wishing they had worn their raincoats today.

Friday, October 22, 2010

The people's phone network

Today I came across a very interesting article on the Internet.

It describes a system called "cognitive radio" which, in this case, allows the provision of cellular mobile services on an uncontrolled part of the radio spectrum known as an ISM (Industrial, Scientific and Medical) band.

There are a number of ISM bands, some of which are recognised around the world as areas where unlicensed transmitters can be used for almost any purpose, so long as they conform to some basic rules.

Perhaps the most well-known ISM band is the one which resides between 2.4GHz and about 2.483GHz (depending a little on the country you're in). This is the band used by Bluetooth devices, wireless video cameras, radio-control systems for models and even microwave ovens.

The system in the article linked to above works on the 900MHz ISM band which is a little more suited for long-range communications but which is not as universally recognised or implemented as the 2.4GHz one.

The big bonus of ISM-band systems is that they don't require specific spectrum allocations for services and therefore allow for much lower costs.

The downside of course, is that each user of these ISM bands isn't guaranteed sole use and therefore some clever techniques have to be implemented to cope with the inevitable prospect of interfering signals created by other users.

In most cases, the interference problem is pretty readily handled by the use of spread spectrum technologies. These allow many different systems to effectively share the same piece of spectrum without significantly affecting each other.

It wasn't too long ago that spread-spectrum technology was pretty much restricted to the likes of military and scientific uses -- the hardware required to create and decode spread-spectrum (SS) signals being complex and expensive. However, as is always the case, advances have meant that SS is now very widely used.

WiFi systems are inevitably SS-based, using a technique known as direct-sequence spread spectrum (DSSS) transmission.

Another widely used SS technique is frequency-hopping spread spectrum (FHSS) which offers its own set of benefits.

Here is an article that explains spread spectrum and the various flavours thereof in a little more detail.
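To make the DSSS idea concrete, here's a toy sketch of my own (not how any particular WiFi chipset does it): each data bit is multiplied by a fast pseudo-random "chip" sequence, and a receiver that knows the sequence correlates against it to recover the bit -- to anyone else, the signal just looks like noise:

```python
# Toy direct-sequence spread spectrum: each data bit (as +1/-1) is
# multiplied by an 8-chip pseudo-random code. Correlating received
# chips against the same code recovers the original bit.
CODE = [1, -1, 1, 1, -1, 1, -1, -1]  # shared pseudo-random chip sequence

def spread(bits):
    """Expand each bit into len(CODE) chips."""
    return [b * c for b in bits for c in CODE]

def despread(chips):
    """Correlate each chip group against CODE to recover bits."""
    bits = []
    for i in range(0, len(chips), len(CODE)):
        group = chips[i:i + len(CODE)]
        corr = sum(g * c for g, c in zip(group, CODE))
        bits.append(1 if corr > 0 else -1)
    return bits

data = [1, -1, -1, 1]
tx = spread(data)
print("recovered:", despread(tx))
```

The nice property is robustness: flip a couple of chips with interference and the correlation still comes out with the right sign, which is why many systems can share the band.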

But back to this "cognitive radio" concept...

Some time ago (in another blog) I suggested that the 2.4GHz ISM band would be a great place to create a P2P cellular network whereby the concept of having fixed towers would become redundant. It appears as if the cognitive radio system is part-way there.

Although ISM-based P2P is unsuited to voice, its potential as a method for handling non-realtime information, such as text messages is huge.

My own experiments indicate that as little as 60mW of transmitter power, when used with an SS system and a small antenna suitable for hand-held devices, can deliver a range of up to 6km, depending on terrain and surrounds. It's clear, therefore, that such a system has huge potential as a fee-free P2P SMS network, should suitable hardware become available.
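A rough free-space link budget (my own back-of-envelope check, assuming the 2.4GHz band, unity-gain antennas and a clear path) suggests those figures are at least plausible:

```python
import math

# Free-space path loss (dB) = 32.44 + 20*log10(d_km) + 20*log10(f_MHz)
def fspl_db(distance_km, freq_mhz):
    return 32.44 + 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz)

tx_dbm = 10 * math.log10(60)   # 60 mW -> ~17.8 dBm
loss = fspl_db(6, 2400)        # 6 km at 2.4 GHz -> ~115.6 dB

rx_dbm = tx_dbm - loss         # received power with 0 dBi antennas
print(f"RX level ~ {rx_dbm:.1f} dBm")
```

That comes out near -98dBm: weak, but within reach of a sensitive spread-spectrum receiver at low data rates, which is all a text message needs.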

Whether or not we actually see a "people's phone network" remains to be seen -- rest assured however, that the technology is here and people are getting interested already.

Friday, October 15, 2010

broadband(its)

Both New Zealand and Australia are working on rolling out national ultra-fast broadband networks, so it's interesting to compare the similarities and differences in the strategies being adopted by two similar countries.

As Kiwis, we really ought to hope that our political overlords aren't planning on following the Aussie example -- or those who don't want or need broadband could be paying a hefty price for opting out.

If our Aussie cobbers don't want an NBN connection and live in Tasmania, they will indeed have to opt out because being "connected" will be the default. What's more, if Aussies do opt out, it will likely cost them a whopping $300 to be connected to the NBN at a later date.

Australian states are also revising their trespass laws so that workers can access sections and dwellings for the purpose of establishing NBN connections without having to gain the owner's permission first.

Another problem is that those who choose not to connect to the Australian NBN when it's launched will eventually find that they have no option, once the existing copper network is decommissioned.

Here in NZ, things aren't nearly as clear.

Our UFB plans are still somewhat fluid but, given the cost of creating such a huge network, it's likely our politicians will give equal consideration to "imposing" it on consumers, whether they want it or not.

The risk that both Australia and NZ (if it follows the same path) face is that when faced with the prospect of connecting to the NBN or being disconnected completely, a growing number may opt for the latter.

Those who don't need high-speed broadband may decide that it's simpler just to rely on mobile technology for their communications.

And that's if some local authorities don't scuttle the national initiative first -- as seems to be the case in Brisbane, where the council has decided that the NBN will take too long to reach its city. They're planning their own $600m network using fibre laid in stormwater drains.

Here in NZ, where the timeframe for implementation is even more tenuous, it's quite possible that in highly populated areas, other independent providers may choose to create their own small broadband networks in advance of the UFB network.

The only thing that is totally clear from observations of Australasia's attempts at implementing nation-wide high-speed broadband networks is that nobody's really too sure exactly how they're going to achieve the goals they've set for themselves.

And, however it's done, there will be costs involved and those costs will be passed on to consumers. If you're Australian, that'll be whether you like it or not.

Friday, October 8, 2010

Microsoft patches wormhole -- maybe.

One of the good things about Microsoft's Windows operating system is that it is normally configured to update itself over the Internet.

One of the bad things about Microsoft's Windows operating system is that updates required to address severe vulnerabilities in its software are sometimes far too frequent for comfort.

And next week, Windows users can expect a larger than normal payload of patches to be automatically downloaded and applied to their Windows-based PCs.

According to advance reports, a staggering 49 security vulnerabilities will be addressed by the upcoming "mega-update", and some of the holes being fixed are listed as "critical".

One of the biggest factors in creating this swathe of updated code is the now infamous Stuxnet worm.

It is claimed that Stuxnet is malware which was specifically designed to infiltrate PCs being used for control and monitoring applications within industry. While most worms have been written primarily to pluck potentially valuable data such as passwords and credit-card numbers from desktop PCs, Stuxnet seems to be a completely different kettle of fish.

By attacking industrial computers and those which often operate in a dedicated role, Stuxnet has sent shockwaves through the SCADA (supervisory control and data acquisition) industry and brings home the growing vulnerability that might arise when relying on such popular software platforms for critical systems.

News reports indicate that one of the biggest infections has been within computers used by the Iranian nuclear industry.

There is also intense speculation that Stuxnet may be a tool created specifically to infiltrate the Iranian Bushehr reactor and glean clues as to whether it is being used for the enrichment of fuel for a possible nuclear weapons program.

If that is the case then Stuxnet may even have been written by the US government to achieve this goal.

Whatever the reasons or role of Stuxnet, its days may be numbered -- after next week's huge patch-payload from Microsoft.

Unless of course, Microsoft has been asked by those fighting "the war against terror" to leave just enough of an open "window" to allow them to keep peeking inside the Iranian's nuclear developments.

Friday, October 1, 2010

Beware the rogues and scoundrels

I have about half a dozen different domains and websites on the Internet and every year I have to renew the domain-names in order to keep those websites visible to the rest of the world.

All my dot-com names are registered through a single company which has, to date, provided me with excellent service at a very reasonable price - so I see no need to change.

I'm also lucky that, having written about scams and rip-offs on the Net for over a decade and a half, I recognise when someone's trying to "pull a fast one".

And so it was when just today I received another domain name renewal invoice in the mail.

The invoice, complete with tear-off remittance advice slip, offered me a 1, 2 or 5-year renewal of my domain -- plus the chance to register the .net and .org versions of that name.

Now, if I were just an accounts payable clerk in a busy company I would probably have checked that the domain name involved actually was the one used by that company and that it was indeed due for renewal and then written out a cheque or scheduled a payment.

That would be a big mistake.

Why?

Because this wasn't an invoice for the renewal of my domain name from the company through which I registered that name at all. It was an invoice from a completely separate company which, through cunning and deception, hoped I would indeed be a busy accounts-payable clerk who'd just pay all the same.

Sure, if I'd paid this invoice my domain name would have been renewed -- but it would also have been transferred away from the company I prefer to deal with and may even have placed the visibility of my website in jeopardy.

Just as bad, this attempted hijack would have also seen me paying three times the price I currently pay.

The company doing this is The Domain Renewal Group and I can't issue a strong enough warning to steer well clear of this crowd.

Their letter is headed "Domain Name Expiration Notice" and says:

"As a courtesy to domain name holders, we are sending you this notification of the domain name registration that is due to expire in the next few months".

and goes on to say:

"You must renew your domain name to retain exclusive rights to it on the Web"

"Failure to renew your domain name by the expiration date may result in a loss of your online identity making it difficult for your customers and friends to locate you on the web"

Then at the bottom:

"Please detach this stub and include it with your payment"

This is an age-old tactic and is very close to a proforma invoice scam. In my opinion, any company which would resort to such tactics really ought not be trusted with something as important as your domain name registration.

So, if you're part of a larger organisation, please check that your accounts department are aware of this scam and that they make sure to double check any invoice for domain name renewal actually comes from the company that provides your service.

Of course if you're a sole trader or smaller business, you still need to be vigilant because these renewal notices are so cleverly worded and so "invoice-like" that a moment's inattention could cost you a lot of money.

Spread the word.

Friday, September 24, 2010

Your web surfing privacy gone in a Flash

Privacy is a big issue these days when it comes to using the internet.

With identity theft on the rise and increasing numbers of web-based networks wanting to track your every online move, people have become very conscious of the need to keep a weather-eye on just what cookies are stored on their computers.

As most Net-savvy people know, a cookie is a small set of data that is used to "tag" your browser so that it can be recognized as a unique computer as you move around the Net.

Many websites issue cookies that identify you solely to make life easier -- by automatically logging you back on when you return or by keeping track of other information specific to your online identity.
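That exchange can be sketched in a few lines using Python's standard http.cookies module; the cookie name and values here are invented purely for illustration.

```python
# A minimal sketch of the cookie mechanism described above.
# The header names are real HTTP; the values are made up.
from http.cookies import SimpleCookie

# A server "tags" your browser with a Set-Cookie header like this one:
header = 'session_id=abc123; Path=/; Max-Age=3600'

cookie = SimpleCookie()
cookie.load(header)

# On every later request to the same site, the browser sends the value
# back -- which is exactly how the site recognises you on your return.
print(cookie['session_id'].value)        # the identifying tag: abc123
print(cookie['session_id']['max-age'])   # how long it persists: 3600 seconds
```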

However, with the increased use and abuse of cookies, most web-browsers now offer the ability to surf in "privacy mode" or similar. When this happens, the cookies that websites send to your computer are only kept for the current browsing session then automatically deleted.

Most browsers also offer a very simple way to peruse and delete any cookies that might have previously been sent to your computer.

That all sounds fine and dandy doesn't it?

Except for one thing... Adobe's Flash.

Flash is a plug-in which provides some wonderful extra functionality when browsing the web. Animated graphics, enhanced interactivity and other features are the reasons why web-designers have flocked to Flash in their droves. These days it's pretty hard to find a website that doesn't include at least a few Flash-based applets on its pages.

Unfortunately for those who like to protect, or at least monitor their privacy, Flash doesn't play by the same rules as your browser.

Even with your browser in "privacy mode", the Flash plugin continues to accept, store and regurgitate its own cookies when requested to by websites you might visit.

Clearly, given how ubiquitous Flash has become, this scuttles the whole utility of any "privacy mode" that might exist on your system. What's more, because Flash has its own stash of cookies, you can't even tell what has already been saved to your machine by asking your browser.

So how do you avoid falling victim to Flash's disregard for your privacy?

Well, you can fit a Flash-blocker plug-in which will disable all Flash applets encountered on webpages you visit. You can, of course, activate those applets if you need or want to -- but some of the more sinister ones, those responsible for issuing and requesting the cookies, are often invisible and aren't needed in order to use the pages concerned.
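For the curious, you can also go looking for Flash's own cookie stash yourself. Flash stores its cookies as "Local Shared Object" (.sol) files on disk; the directory names below are the commonly documented default locations for Linux and Mac, and are assumptions that may vary between Flash versions and platforms.

```python
# A rough way to inspect Flash's Local Shared Objects (*.sol files).
from pathlib import Path

def find_flash_cookies(base: Path) -> list[Path]:
    """Return every .sol file stored beneath the given directory."""
    if not base.exists():
        return []
    return sorted(base.rglob("*.sol"))

# Commonly reported default locations (assumed, not guaranteed):
candidates = [
    Path.home() / ".macromedia" / "Flash_Player",                             # Linux
    Path.home() / "Library" / "Preferences" / "Macromedia" / "Flash Player",  # Mac
]

for base in candidates:
    for sol in find_flash_cookies(base):
        print(sol)
```

Anything this turns up is data that websites have stored on your machine without ever appearing in your browser's cookie list.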

Another way, which it is hoped will become practical very soon, is to dispense with Flash altogether.

These days, many of the features that were once the sole domain of Flash are now becoming available through the latest extensions to the HTML standard. Embedded video, clever animation and improved interactivity are now all possible without recourse to Flash so eventually it is hoped that web-designers will leave Flash in the dust and move on.

There's also hope that Adobe's promise to deliver a mechanism for allowing users to control and examine the cookies deposited on their machines by way of Flash will also be fulfilled.

In the meantime -- do you really know who's keeping tabs on your web-browsing?

Friday, September 17, 2010

Blu-ray DRM: epic fail!

Everyone knew it would only be a matter of time before the strong encryption used to lock-up the content of Blu-ray disks would be broken.

It happened years ago to the CSS system used by the humble DVD and now, by virtue of a similar "leak" of important information, it has happened to today's HD video standard.

Now hackers have all the clues they need to craft their own software to decode the otherwise unintelligible stream of data that is stored on Blu-ray disks and which passes through HDCP (High-bandwidth Digital Content Protection) connections (such as HDMI, DVI, etc) between various system components.

Experts believe the most likely outcome of this event is the creation of a mod-chip that will decode the otherwise encrypted streams, allowing ready "ripping" of HD content stored (and allegedly protected) on Blu-ray disks.

Rumors of the leaked master-key were rife on the internet as early as Tuesday but confirmation that the master key really had "escaped" came today when Intel issued a statement saying "We have tested this published material that was on the Web. It does produce product keys... the net of that means that it is a circumvention of the [HDCP] code."

Opponents of DRM tout the leak as another example of how DRM does nothing but handicap consumers with higher prices and reduced flexibility in how they can use the products they purchase.

Now that the genie is out of the bottle, so to speak, there will soon be a plethora of code available to make ripping Blu-ray content a trivial process.

Legitimate disk owners will justify such activities by claiming they have a right to back up the movies they've bought and paid for -- pirates will rejoice in having ready-access to HD content without paying the prices being asked by the movie studios.

Intel, the creator of the HDCP DRM system has advised clients that they may now have to resort to legal action to enforce their rights as copyright owners, if examples of piracy in the wake of the key leak are detected.

This commentator wonders when the studios will wake up to the fact that they're wasting an awful lot of money trying to protect their product in a manner that will always be doomed to failure. Perhaps they should learn from the music industry, which has begun to ditch the DRM on its wares in favor of reasonable pricing and ready availability.

Friday, September 10, 2010

Apple feels the Android burn?

There have been a number of reports published of late which indicate that Android-based personal devices are becoming a favourite with consumers.

Sure, we'd all love a lovely new iPhone or iPod but more often than not, the equivalent device from other vendors is cheaper and (gasp!) occasionally they're even more capable.

Now I can't help but wonder whether the guys at Apple have been reading these reports and talking in hushed tones about the way that Android-based systems appear to be reaching critical mass with astonishing alacrity. They know that if the market becomes enamoured with these devices, sales of their iStuff could take a hit.

They're also worried that developers may see more bountiful pickings to be had by producing software for the Android market and opting to give Apple's tightly controlled product a wide berth.

So just how worried are Apple?

Well hold onto your hat... Apple have finally decided to allow developers to use a raft of development languages and tools that were previously forbidden.

In fact, even that most evil of all tools, Flash, is no longer persona non grata -- at least as a platform for delivering games and other applications. Word on the street, however, is that you still won't see any browsers on iOS which support Flash embedded in webpages.

So whereas previously Apple was incredibly strict in dictating exactly what languages and tools were allowed when creating apps, they've now backed right away - to the point where almost anything goes.

No doubt developers will be very happy with this change, as will users of Apple's products.

Now, only time will tell whether these concessions on Apple's part are enough to knock the wind out of Android's sails. Does Android already have the momentum needed to give other device manufacturers the key benefits they need to deliver better hardware and software solutions than Apple?

I can't wait to watch for what happens next.

Friday, September 3, 2010

Report: Alien life has reached earth (or maybe not)

I bet you're probably thinking I've lost an oar and am now writing science-fiction instead of a technology column but, if reports on the science-news wires today are correct, earth may have experienced a visit by alien life forms at the start of this decade.

The reports suggest that the appearance of hitherto undiscovered microbial life-forms which remain completely inert at temperatures below 121 degrees C may have fallen to earth from space in an event that was reported as "red rain" back in 2001.

It was initially thought that the red rain was simply the result of dust or lichen spores of a terrestrial origin -- however there have been a small group of those who consider that the tiny cell-like objects contained in the rain came from outer space. This suggestion was given further weight when observers reported a bright flash and loud boom shortly before the red rain began to fall.

A couple of years after the event, physicist Godfrey Louis published a paper in which he proposed that the red rain was caused by debris from an exploding comet or meteor as it entered earth's atmosphere.

At that time, Louis published another paper with Santhosh Kumar in which he claimed he had managed to get the "cells" contained in the red rain to reproduce at temperatures approaching 300 degrees C. The pair published another paper in 2008 in which they restated these claims.

And now, like clockwork, the pair (plus several other physicists) have published another paper detailing further research and observations which have been made on the mystical red cells.

Oddly enough, despite the huge implications of such a find, nobody outside of Louis and Kumar's team has verified the strange behaviour of these possible alien invaders - which must leave all but the most enthusiastic sci-fi readers somewhat skeptical of the conclusions drawn.

However, if it does turn out that the red rain was indeed an alien invasion, one can't help but wonder if climate change will eventually lift temperatures to the level these cells need to "come alive" and colonise the planet. Of course we won't care because by then, mankind will have well and truly vacated the scene.

Perhaps panspermia is for real after all.

Friday, August 27, 2010

The silent enemy within

News broke this week which revealed that back in 2008, the US Pentagon was infiltrated by a virus which made its way onto supposedly secure systems by way of an infected USB drive.

Before this malware was detected and eradicated, it's possible that confidential data had already been transmitted to persons-unknown, leaving the US administration with egg on its face.

As a result of this event, significant changes have been made to the protocols surrounding the use of such devices on Pentagon and other important government networks, with assurances given that a repeat of this fiasco would be most unlikely.

However, some are not so sure.

When an array of Western companies found their computer systems under attack earlier this year, it was reported that those attacks came from China, most likely sponsored by the Chinese Government. It is sobering therefore to realise that a surprisingly high number of peripheral devices and the chips contained in them are now manufactured in China.

While it has been fairly easy to mitigate the risk posed by malware infected USB drives, how can any organisation be sure that similar threats are not quietly lurking inside the chips which perform otherwise mundane tasks in components such as network cards, disk controllers, modems, routers, etc?

If any government or other organisation was seeking to gain access to data stored on computers used by Western governments or businesses, it would be quite feasible to insert the required trojan programming into the micro-code used in these specialist chips and activate it in response to a specific trigger.

The fact that we haven't seen any instances of this to date is no indication that it hasn't already been done. Such attacks may have not yet been detected and even if they were, it's quite possible that just as in 2008 at the Pentagon, they have been hushed-up so as to avoid embarrassment or to make it easier to determine where the data is going and who is responsible.

As the demand for some components grows at a pace which seems to outstrip the primary manufacturer's ability to supply, a door is opened to second-tier suppliers who may be willing to allow a little extra code to be added in return for a significant back-hand payment from nefarious parties.

It will be interesting to see what unfolds in the months and years ahead -- but when it does happen, remember this column.

Friday, August 20, 2010

Intel, what were you thinking?

This just in -- Intel has spent a huge amount of money (some US$7.7 billion) buying security software company McAfee.

What were they thinking?

Even seasoned analysts are left scratching their heads, trying to understand how this acquisition could be of strategic value to Intel, whose primary business is computer processors and support chips.

It has been rather entertaining to read the speculation taking place within the media right now as to the whys and wherefores of this move.

For example, the BBC reports that "Intel intends to build security features into its microprocessors which go into products such as laptops and phones".

ZDNet seems to think that the move is doomed.

BusinessWeek seems to think the purchase is an attempt to gain more penetration into the handheld and embedded markets where Intel has performed relatively weakly to date.

But only CNN seems to be on the money by pointing out that the takeover has no apparent rationale other than as an opportunity to invest some of Intel's cash reserves. They suggest that Intel may be doing this for no reason other than a pure investment strategy - perhaps technology doesn't even come into the equation.

Others, like myself, however, wonder about the good sense of investing in an anti-virus software company that now faces such stiff competition and seems to be slipping in the market. For many years, McAfee has been seen as the benchmark for AV software but in recent times its performance has come under question and many new players are nipping at its heels. Nowhere is the effect of this decline more obvious than in the price of its stock, which dropped from over $45 just under a year ago to around $30 at the time of the Intel purchase.

Can Intel breathe new life into McAfee?

One would like to think that having invested $7,700,000,000 in the company, they will be working very hard to reverse any decline and revitalise the company's fortunes. Only time will tell.

In the meantime, this does reshape the landscape a little for all computer users -- albeit we may not see the effects of those changes until someone really works out why Intel really made this purchase.

Friday, August 13, 2010

A new spin on memory technology

This week I received an 8GB microSD memory card that I'd ordered from China for the princely sum of about US$20.

It came in a simple bubble-wrap envelope carrying just a dollar's postage and the tiny black shard of plastic is smaller than my thumbnail.

Yet, amazingly, this tiny card can hold the equivalent of about four million A4 pages of text (a stack about 500m high).

If you look at the difference in data-density between that huge stack of paper and the memory card, the comparison is absolutely astonishing.
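As a rough check of those figures, here's the back-of-envelope arithmetic, assuming about 2,000 characters of plain text on an A4 page and paper roughly 0.1 mm thick (both assumed values, not measured ones):

```python
# Back-of-envelope check of the "four million pages, 500 m stack" claim.
card_bytes = 8 * 10**9          # an "8 GB" card, decimal gigabytes
chars_per_page = 2000           # assumed plain-text capacity of one A4 page
sheet_thickness_mm = 0.1        # assumed thickness of one sheet of paper

pages = card_bytes / chars_per_page              # one byte ~ one character
stack_height_m = pages * sheet_thickness_mm / 1000

print(f"{pages:,.0f} pages")       # ~4,000,000 pages
print(f"{stack_height_m:,.0f} m")  # ~400 m -- the same ballpark as quoted
```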

When I built my first memory card back in 1977, using discrete 128 byte (not K-bytes but BYTES) RAM chips, I simply could not have imagined how much smaller and cheaper this essential technology would get within the space of just over three decades.

And, if you think today's micro-memory cards are already small, get ready for the next step in miniaturising and improving memory technology: spintronics.

Whereas today's memory relies on trapping electrical charges in insulated "wells", spintronics takes things down to a whole new level where the spin of individual electrons determines the state of a memory bit.

Through the use of this spintronics technology, the size and energy requirements of our memory should fall significantly while speed will actually increase.

Spintronics technology relies on the interaction of magnetic fields and spinning electrons to do its magic and right now, researchers are experimenting to find the best combination of materials to use.

One of the first companies to try and commercialise this technology is Grandis, a company that presently holds a swathe of patents in the field of spintronics.

Exactly when (or even "if") we'll be able to buy memory devices based on this technology is unknown at this stage but, if it lives up to its promise, the dream of smaller, faster, more powerful computing will be taken to the next level.

Of course there are no guarantees that this cutting edge technology will make it to the starting line. I can't help but think back to an article I read in an issue of Byte magazine a little over 30 years ago which predicted a strong future for magnetic bubble memory.

Does anyone even remember that technology now?

Friday, August 6, 2010

Why? Because we can

Technology enables us to do some amazing things.

Sometimes we do these things because we need to, sometimes it's simply because we want to.

Take the humble mobile phone for example. Most of us need to use this device in order to maximise our efficiency when working. Not being tied to a land-line enables many people to coordinate their appointments and schedule their day while on the run.

We also use mobile phones because we like to. When we've got a few minutes to spare we can take the chance to check and see what's on TV tonight or maybe ring a friend for an overdue chat.

Most people have the same need/want relationship with their computers. These days we need them to do important tasks such as paying bills by internet banking or sending and receiving important emails. We also like to use them to play games or browse entertaining material online.

Another "want to" application for the humble computer has just been demonstrated by a 54-year-old Japanese engineer who decided to break the world's record for the precision to which the value of pi has been calculated.

Very few people have any actual application for calculating pi to this level of accuracy but Shigeru Kondo obviously thought this was a worthy challenge and set about doing so, for no other reason than to prove that he could.

Prior to Mr Kondo's record-breaking attempt, the value of pi had only been calculated to 2.7 trillion decimal places. Now we know its value to 5 trillion places.

Just how long is a number containing 5 trillion decimal places?

Well, if it were printed out in a single line of 12-point fixed-pitch type, it would be (according to my possibly flawed calculations) some 20 million kilometres in length.
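For anyone who wants to check that arithmetic, here's a quick sanity check. The only assumption is character width: classic 12-point fixed-pitch type (think Courier) runs at 10 characters per inch. It lands in the same order of magnitude as the figure above.

```python
# Sanity check: how long is 5 trillion digits in 12-point fixed-pitch type?
digits = 5 * 10**12
chars_per_inch = 10     # assumed: 12-point fixed-pitch ~ 10 characters/inch
inch_m = 0.0254         # metres per inch

length_m = digits / chars_per_inch * inch_m
print(f"{length_m / 1e9:.1f} million km")   # ~12.7 million km
```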

Perhaps because of this, Mr Kondo hasn't printed out his result but it is instead stored on the 20 hard drives of his custom-assembled Windows-based PC.

While we might question just why anyone would take it upon themselves to perform such a remarkable, yet seemingly pointless, task, perhaps the greater achievement it reflects is the fact that we can.

It is really quite mind-boggling when you think that the power to achieve this incredible feat is now easily within the grasp of anyone who really wants to do it.

Sometimes, technological challenges are just like climbing Mount Everest. Why do we do it? Because we can.

Friday, July 30, 2010

When the last number has gone, what then?

The internet is a complex maze of origins, pathways and destinations through which packets of data have to be directed.

Whether it's email, web-traffic, file downloads, video or VOIP, all of this data relies on the existence of IP numbers so as to sort out which turns to take and where the packets are to be finally delivered.

When Al Gore first invented the internet (yeah, right), there wasn't any problem. IP numbers were plentiful; in fact there were about 4 billion of them to go round and in the early days there simply wasn't much demand. Indeed, plenty of people would have scoffed at the suggestion that one day we'd run out of these numbers.

Well bad news.

That day is now less than a year away.

At the current rate of allocation, we have a little over 300 days worth of IP numbers left.

The problem is that people just keep on adding stuff to the internet at an ever-increasing rate.

It's not just computers either. There are a growing number of "smart" devices that hook up to the Net, and each requires an IP number. The webcams which keep an eye on traffic - they need IP numbers. The webserver on which this site is located -- it needs an IP number (perhaps many in fact) and every time you log on to do some browsing, you also need a number.

So what's the problem? Surely they can just make some more numbers can't they?

Who says that 4 billion is the limit?

Well unfortunately, because of the way current IP numbers are formed, we can't just print more of them.

Right now, IP numbers consist of 4 bytes and are usually expressed in the form nnn.nnn.nnn.nnn, where each 'nnn' is a number from 0 through 255. Once we've used up all the numbers from 0.0.0.0 to 255.255.255.255, there are no more -- unless we switch from 4 bytes to more.
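To make that 4-byte structure concrete, here's a small illustration using Python's standard ipaddress module (the address shown is just an example from a private range):

```python
# Each dotted group of an IPv4 address is one byte of a 32-bit number.
from ipaddress import IPv4Address

addr = IPv4Address("192.168.1.1")

print(int(addr))      # 3232235777 -- the address as a single 32-bit number
print(addr.packed)    # b'\xc0\xa8\x01\x01' -- exactly 4 bytes

# ...which is why the total pool is 2**32, about 4.3 billion addresses.
print(2 ** 32)        # 4294967296
```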

And indeed that's just what's planned.

The current system is based on what's known as IPv4. A new system called IPv6, which uses four times as many bytes to create the numbers needed, is slowly being rolled out.

IPv6 will offer a massive increase over the humble 4 billion numbers we already have -- in fact it's a mind-bogglingly huge number -- but only a fool would say that it's more than we'll ever need. However, I think it's safe to say that by the time we run out of these numbers, the internet will have evolved to something far more sophisticated and will probably use a totally different addressing system.
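To put "mind-bogglingly huge" into perspective, a quick comparison of the two pools (again using the standard ipaddress module; the address is an example from the documentation range):

```python
# IPv6 addresses are 16 bytes (128 bits) against IPv4's 4 bytes (32 bits).
from ipaddress import IPv6Address

addr = IPv6Address("2001:db8::1")
print(len(addr.packed))     # 16 bytes, versus IPv4's 4

ipv4_pool = 2 ** 32
ipv6_pool = 2 ** 128
print(ipv6_pool)                 # ~3.4e38 addresses
print(ipv6_pool // ipv4_pool)    # 2**96 entire IPv4-sized internets
```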

So what's the problem then... let's just roll out IPv6 and start handing out new (bigger) numbers before the IPv4 ones run out.

Unfortunately, it's not quite that simple. In order to use IPv6, a whole lot of expensive equipment has to be replaced or significantly upgraded -- and that's going to cost. A lot!

ISPs, telcos and others who are responsible for the routing of data around the place will be looking at big bills to become IPv6 compatible and that could mean higher prices for the rest of us, as they pass those costs on.

There are interim solutions that may postpone the fateful day when the last IPv4 number is allocated but some of those techniques will break popular internet applications. Examples of this kind of thing have been around for a while, with many web-hosting companies managing to squeeze hundreds of websites onto the same IP number.
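One of those squeezing techniques, name-based virtual hosting, works because every HTTP/1.1 request carries a Host header naming the site it wants, so hundreds of websites can share a single IPv4 address. A toy dispatcher might look like this (the host names are invented for illustration):

```python
# A toy sketch of name-based virtual hosting: the server picks the
# right website from the Host header, not from the IP address.
def pick_site(request: str) -> str:
    """Return the virtual host an HTTP request is addressed to."""
    for line in request.split("\r\n"):
        if line.lower().startswith("host:"):
            return line.split(":", 1)[1].strip()
    return "default"    # pre-Host-header (HTTP/1.0) requests land here

request = "GET / HTTP/1.1\r\nHost: example-one.test\r\n\r\n"
print(pick_site(request))   # example-one.test
```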

In the meantime, the next 12-24 months will be "interesting times" for the internet industry as they struggle to update and hone their systems to work with the new, much larger IPv6 addresses.

I'll revisit this issue in a year's time, let's see how they've gotten on by then.

Friday, July 23, 2010

Publishers typing with fingers-crossed for iPad rescue

Newspapers, magazines and other print-media publishers have been hard-hit by the abundance of free material made possible by the Internet.

When I think back a decade or two I recall that I subscribed to at least a dozen monthly periodicals which were either delivered to my mailbox or kept for me by my local bookshop.

I would eagerly await the arrival of Byte, Dr Dobb's Journal, Scientific American, Australian PC, RC Models and Electronics, plus a number of other titles which contained a plethora of interesting articles and an avalanche of advertising.

They were good times for the print publishing industry with advertisers' dollars flowing in like water.

Oh how things have changed!

Now it would be a very brave (or stupid) person who opted to launch a print-based technology magazine (or any magazine for that matter) in today's "connected" world.

Long delays between the time that an article is written and when it finally reaches the reader means that print is all but dead for tech-industry news publications. Readers expect to be able to log in on a daily basis to get the latest news and happenings. The normal four to six week turn-around for print is no longer acceptable.

Many print publishers have done their best to move their periodicals to the web but few are making anything like the money they did in the pre-IP era. Online advertising just doesn't command the same price as print advertising and advertisers can see just how (in)effective their ad-spend is today, something that often surprises them, and not in a good way.

However, the launch of the Apple iPad in NZ this week has seen print-media publishers getting very excited. It's almost like the second coming, the buzz is so great.

At last, there is now a platform that offers the ability to deliver a real print-magazine format without the limitations of online browsing.

When used as an e-Mag reader, the iPad allows users to enjoy full-page colour ads and longer, in-depth articles with almost the same comfort and convenience as offered by a glossy magazine.

This re-opens the doors for those magazine publishers who were worried where their next meal was coming from.

Now they can go back to a subscription model, charging readers for each issue downloaded and commanding a new premium for the ad-space contained in those iPad editions.

Or at least that's the plan.

Whether it actually works or not, a surprisingly large percentage of mainstream print-media publishers are jumping aboard with all fingers crossed.

Whether readers will warm to the idea of once again having to pay for content is another question.

Once the novelty wears off their shiny new iPad, will they just go back to a PC and web-browser for their news, information and entertainment? Or will they continue to stump up cold, hard cash for the iPad e-version of their favourite print magazines?

I really don't know.

Will the iPad audience be large enough to support this publishing model?

So many questions, so few answers -- yet.

Would you pay for the iPad version of a periodical you currently get for free online?

Would you pay for the iPad version of a print publication you currently subscribe to or buy regularly from the local dairy or book shop?

Friday, July 16, 2010

The technologies that nobody really wants

Have you heard of video-phones?

These have been a standard part of our vision of the future ever since the telephone itself was invented.

Futurists and sci-fi writers have predicted the arrival of the video phone for decades and now we have the technology to deliver this service but, it seems, nobody really wants it.

Oh sure, a few people play around with Skype's videophone service and perhaps even use their mobiles to exchange live video while making calls but, once the novelty wears off, most go back to good old voice-only conversations.

It seems that video-phones are just one of a long line of technologies that should be outrageously popular but which in fact, nobody wants.

Need another example?

What about voice-controlled computers.

As far back as the 1980s, companies like IBM were predicting a future where the human voice was the primary interface mechanism between man and machine. A number of computer manufacturers even shipped machines with speech synthesizers in them. I remember well the Bondwell portable computer I owned in the early 1980s which had just such a feature.

After playing with this feature for an hour or so it was turned off and never used again. Today's computers no longer have built-in speech synthesizers. What does that tell you?

IBM themselves invested enormous amounts of money into voice-recognition, in the hope of producing a system that would allow non-typists to simply dictate their letters directly into a computer's microphone and have them pop out of the printer, spell-checked and immaculately formatted -- perfect in every way.

This still hasn't happened, even though voice recognition systems are now more than capable of providing very high levels of accuracy.

It seems we live in a world full of solutions that are still looking for problems.

Perhaps there's a sage lesson there for any budding entrepreneur who thinks they have an idea that will make them a fortune.

Just remember, it's better to actually research your market and find out what it needs and wants before you assume that your idea will sell.

In the meantime, I'm still waiting for my fusion-powered flying car to be delivered.

Perhaps tomorrow.

Friday, July 9, 2010

The Chinese are invading

This week, an awful lot of my time seems to have been taken up dealing with matters related to China.

Just a few decades ago, nobody really considered China to be much more than a 3rd-world country, with subsistence farming and small quantities of low-tech exports making little impression on the world's economy.

My, what a difference a few decades can make.

Today China is a massive industrial and economic superpower, exporting huge amounts of low and hi-tech products to eager markets around the world.

A quick check of the home appliances and electronic devices around your home or office will quickly show just how much we have become reliant on this previously sleeping giant.

And now China is already taking another huge step towards filling the world with its plethora of products...

They're selling direct via the internet.

Until recently, the Chinese followed traditional business practice, selling their wares to importers around the world, who then sold to retailers, who sold to end-users. That is changing, and changing very rapidly, thanks to the Net.

I'm not here to promote commercial websites but I recently spent almost an entire day browsing the incredibly comprehensive aliexpress.com site.

My goodness.... there is a bewildering array of products there and they're all available for purchase over the web, at typically Chinese prices.

Whether it's farm machinery, MP3 players, electronic components, metalworking machines, raw materials or services -- they're all there and all just a few mouse-clicks away from your door.

Until recently, there have been many sites that showcased Chinese products and companies on the Net, but most of those had no e-commerce capabilities, acting instead simply as generators of sales-leads and requiring buyer and seller to transact business directly between themselves. These newer sites, however, actually handle the transaction and even offer an escrow service to take much of the risk out of such purchases.

To be honest, I don't think I'd be investing in bricks and mortar retail to any degree, now that the Chinese are getting serious about making their products directly available to anyone with a credit-card and internet connection. There are challenging times ahead for the traditional wholesale/retail distribution model, that's for sure.

As someone who's always looking for a bargain, I can see that I'll inevitably be wasting far more time trawling these websites for a good deal. Come to think of it, when I factor in the time I spend doing this, the total cost may not be such a bargain after all.

However, I think it's fair to say that the Chinese invasion is well underway. Be prepared. Go place your credit card in an ice-cream container filled with water and stick it in the freezer. That'll help reduce the chances of an unexpected "impulse purchase" while you're trawling the pages of these new Chinese online superstores.

Friday, July 2, 2010

We're falling for gyros

Back in the early days of aviation and aerospace, gyroscopes were bulky mechanical devices which, despite their size and weight, were amazingly fragile.

Anyone who's ever played with a spinning top understands the basics of a gyroscope -- indeed, a child's spinning top is one of the most common demonstrations of gyroscopic stability.

When not turning, the spinning top will never balance on the narrow point that is its base. No matter how carefully you try to get it balanced and level, when you release it, it quickly falls to the side until it is resting on that point and the edge of the disk.

However, spin that top up and it will quite happily sit, steady as a rock on that tiny point in the center. Even if you give it a good shove, it will still maintain an upright stance and continue to pivot on its base.

The wonders of gyroscopic force go far beyond a simple child's toy, however. Gyroscopes have long been a cornerstone of our ability to build and fly aircraft, rockets and orbiting satellites.

And now, thanks to the inexorable tide of miniaturisation and advances in solid-state technology, gyroscopes and their close cousins, accelerometers, are built into a growing number of consumer electronic devices.

What's the difference between an accelerometer and a gyroscope?

An accelerometer detects acceleration (such as the force of gravity or a change in the speed of an object) along a single axis. A gyro detects a change in angular position (rotation).

To draw an analogy -- let's say you mount a gyro and an accelerometer on your head...

Now, if you are standing and you crouch, then return to a standing position, the accelerometer will have measured the forces created by that movement and, when connected to a suitably programmed computer, could even produce a graph that plotted the vertical position of your head against time. However, the gyroscope would not even have noticed the movement, because there was no rotation involved.

Yet, if you twisted your body and head around so as to look behind you, the accelerometer would read nothing, but the gyroscope attached to your head would be able to tell you just how many degrees your head had rotated and (with the right computer/software) could plot a graph of your head's rotational position against time.
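To make the analogy concrete, here's a toy numerical sketch (every sensor value is invented purely for illustration) showing how each sensor would report the two movements described:

```python
# Toy sketch of the head-mounted sensor analogy above (hypothetical numbers).
# An accelerometer reports linear acceleration along an axis; a gyro reports
# angular rate (degrees per second) about an axis.

def integrate(samples, dt):
    """Simple rectangular integration of a list of sensor readings."""
    return sum(s * dt for s in samples)

dt = 0.5  # seconds between samples

# Crouch-and-stand: vertical acceleration changes, but there is no rotation.
accel_z = [-2.0, -1.0, 0.0, 1.0, 2.0]   # m/s^2, made-up crouch profile
gyro_yaw = [0.0, 0.0, 0.0, 0.0, 0.0]    # deg/s: the gyro sees nothing

print("crouch: net velocity change =", integrate(accel_z, dt), "m/s")
print("crouch: rotation =", integrate(gyro_yaw, dt), "degrees")

# Twisting to look behind you: pure rotation, no vertical acceleration.
accel_z = [0.0] * 5
gyro_yaw = [72.0] * 5                   # 72 deg/s for 2.5 s = 180 degrees

print("twist: net velocity change =", integrate(accel_z, dt), "m/s")
print("twist: rotation =", integrate(gyro_yaw, dt), "degrees")
```

The crouch shows up only in the accelerometer trace, and the twist only in the gyro trace, just as the analogy suggests.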

So just how are gyros and accelerometers being used these days?

Cameras with "Optical Image Stabilisation", which promises to reduce the effects of camera-shake, usually contain gyros (to detect the small rotations of the lens angle when your hand wobbles), accelerometers (to detect vertical/horizontal displacements of the camera), or both. The signal from the movement sensor is then used to adjust the angle of a small mirror or prism to compensate for the movement, effectively eliminating or greatly reducing the effect of that camera-shake.

If you've got an iPhone or iPad then you'll know that these devices have accelerometers in them that allow the display to recognise whether it's being held in landscape or portrait mode.

Users of the Wii console will be very well acquainted with just how accelerometers can convert the direction and speed of a movement into a game-input.

And of course, we've all seen the Segway which relies heavily on gyros and accelerometers to maintain its upright position and allow control inputs to be made simply by leaning forwards or backwards.

So where are the big, heavy spinning wheels that used to make up the gyros of yester-year?

Well, they've been replaced by tiny etched slivers of silicon which vibrate at a high frequency and, in the case of a gyro, rely on something called the Coriolis effect.

Accelerometers use a small strip of silicon which is bent by the force of acceleration, effectively changing the gap between it and surrounding bits of silicon. When a high frequency signal is passed through this arrangement, the amount of deflection (hence the acceleration) can be very accurately measured.

Both these bits of clever stuff are what's called MEMS devices (Micro Electro Mechanical Systems) and form part of a family known as motion sensors. Most of these devices are smaller than a fingernail, making them ideal for today's ultra-compact consumer electronics.
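As a rough illustration of the capacitive accelerometer principle just described, here's a back-of-envelope model. Every number in it is made up for the sketch and taken from no real MEMS part; it simply shows how a spring-mounted proof mass turns acceleration into a measurable change in capacitance:

```python
# Illustrative model of a MEMS capacitive accelerometer (all values invented).
EPS0 = 8.854e-12   # permittivity of free space, F/m

def deflection(mass_kg, accel_ms2, spring_n_per_m):
    # Proof mass on a silicon spring: F = m*a = k*x  ->  x = m*a / k
    return mass_kg * accel_ms2 / spring_n_per_m

def capacitance(area_m2, gap_m):
    # Parallel-plate approximation: C = eps0 * A / d
    return EPS0 * area_m2 / gap_m

rest_gap = 2e-6    # 2 micron gap between the plates at rest
area = 1e-7        # 0.1 mm^2 of plate area
mass = 1e-9        # 1 microgram proof mass
spring = 1.0       # spring constant, N/m

for g in (0.0, 9.81):  # at rest vs under 1 g of acceleration
    x = deflection(mass, g, spring)
    c = capacitance(area, rest_gap - x)
    print(f"{g:5.2f} m/s^2 -> gap {rest_gap - x:.3e} m, C = {c:.3e} F")
```

The acceleration narrows the gap, the capacitance rises, and measuring that change (via a high-frequency signal, as the article notes) recovers the acceleration.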

They're already making huge (often covert) inroads into our hi-tech appliances and you can expect to see a lot more features and functions that rely on them as we wake up to the potential they offer to clever and innovative technology designers.

Friday, June 25, 2010

Serendipitous collisions

A really good idea is nothing more than the collision of a problem with a solution.

When the timing of that collision is just-right, wonderful things happen and a lot of money can be made.

A great example of this is a BNZ employee who was tasked with thwarting the activities of card "skimmers" -- those evil little sods who secretly read the magnetic strip on your EFTPOS or credit card then transfer the data onto a blank card so as to effectively create a perfectly functional clone.

Possibly because it has been rather late to the party in implementing chip-based cards, New Zealand has become a popular place for card skimmers to ply their trade. A while back there was a spate of ATMs which had been covertly fitted with card-skimmers and tiny cameras to capture not only the data on a card's magnetic stripe but also the PIN used to access that card.

Shortly after this, banks fitted "anti-skimming" devices to their ATMs which were designed to thwart the simple devices being used by skimmers. Sometimes these work but reportedly, sometimes they don't. Besides which, there have been plenty of other ways in which those intent on capturing the gold hidden in your card's magnetic strip can obtain that data.

Well, the nice man at BNZ was puzzling over how to overcome card-skimming until such time as all Kiwis were using chipped plastic, and came up with a really bright idea...

Why not alter the information each time the card is used -- effectively creating a digital fingerprint on the card that was changed after each transaction?

Sheer brilliance -- so long as the card reader was also a writer.

It means that each time the card is used, its fingerprint is compared with the one on file and, if they don't match, it's clear that the card is a clone which is now a transaction or two out of date. What's more, even if the card skimmers get in first and make a transaction before the bona fide cardholder, the cardholder will be alerted when his own card is rejected at the next point of sale or ATM.
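The scheme described above can be sketched in a few lines. This is a hypothetical toy version (not BNZ's actual patented implementation): every successful transaction rewrites a random token on the stripe, and the bank compares the token presented with the one it has on file:

```python
# Toy sketch of a rotating card "fingerprint" (hypothetical scheme).
import secrets

class Bank:
    def __init__(self):
        self.on_file = {}              # card number -> expected fingerprint

    def issue(self, card_no):
        token = secrets.token_hex(8)   # fresh random fingerprint
        self.on_file[card_no] = token
        return token                   # written to the stripe by the terminal

    def transact(self, card_no, presented):
        if presented != self.on_file[card_no]:
            return None                # stale token: a clone is in play
        return self.issue(card_no)     # rotate: terminal rewrites the stripe

bank = Bank()
stripe = bank.issue("card-001")        # genuine card issued (made-up number)

cloned = stripe                        # a skimmer copies today's stripe data

stripe = bank.transact("card-001", stripe)   # genuine use rotates the token
assert stripe is not None
assert bank.transact("card-001", cloned) is None  # clone is now out of date
```

The key property is exactly the one the article highlights: whichever copy of the card transacts second presents a stale fingerprint and is rejected, so cloning is detected within a transaction or two. It does, of course, require the reader to also be a writer.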

The idea was patented and now this chap's praises are being sung far and wide by the media. Well the NZ media anyway.

Now I bet a fair number of people reading this are saying to themselves "I could have thought of that, it's so simple". The truth is that yes, most of us could have thought of it -- but we didn't. That's why this chap is a minor celebrity and stands to make a handsome amount of money from licensing, whereas the rest of us don't.

Now there are other problems that exist in the world for which a good solution is worth gold.

Take the problems BP are facing over their leaky well in the Gulf of Mexico.

Apparently there are tens of thousands of good ideas flooding in (via the web) to BP's troubleshooters. In fact, reports indicate that there are just too many ideas being submitted by optimistic "ideas guys" and BP simply don't have the time or resources to check out the viability of them all (come on, if they spent enough money they could).

Amongst those ideas are a few Kiwi ones which, on the surface of it, seem pretty good.

I wonder if they'll turn out to be as lucky as the guy from the BNZ.

There's a belief that everyone has a million-dollar idea at least once in their lives. The difference between those who make a million and those who don't is usually down to actually recognising the value of that idea and having it at the right moment in time.

Unfortunately, for the vast majority of us, we may have the idea but the timing and ability to leverage it into cash is never quite there.

In the meantime, let's hope that clever Kiwis continue to have those good ideas and that at least a few of us have the ability to turn those clever ideas into large amounts of cash.

Congratulations to the guy from the BNZ. I hope his idea makes him very wealthy and provides us with valuable extra protection against card skimmers.

Friday, June 18, 2010

A nice gesture

I wrote a while back about how there was some biffo going on between Apple and other companies over the use of "gestures" such as "pinching" and "stretching" on the iPad/iPod/iPhone touch-screens. Apple consider this their exclusive patented intellectual property and are keen to stop others from infringing it.

However, if the gear demonstrated at the recent E3 game symposium in the USA is any indicator, Apple's technology is already "so last year" and things are changing in the world of man/machine interface at an ever-increasing pace.

The big news was the Microsoft Kinect system, a powerful new interaction technique that actually relies on non-contact gestures to instruct a computer (in this case a games console) what to do.

Using a camera and some smart software, the Kinect system watches the player and converts not only their hand and arm gestures but even entire body movements into commands. Clearly this is great for gaming but it's already being touted as a "Minority Report" style interface for other IT applications.

The only problem being reported so far is that it doesn't work very well (or even at all) if you try to use it while sitting down in front of the screen. Apparently it currently relies on sensing the entire outline of a human figure in order to identify the various motions which are then converted to commands. Microsoft are said to be working on an enhancement to the software that will overcome this shortcoming.

So where to now for Kinect?

Will we see the cameras on our mobile phones being used to interpret gestures so that, in order to create email, select phone-numbers or perform other operations, we don't even have to touch the screen?

Will laptops recognize the frown on our faces when we make a mistake and automatically activate the "undo" function to restore a smile?

I guess that at this stage, nobody knows for sure -- but the incredibly short span of time that has passed between the appearance of body-gesturing as science fiction in Minority Report and the arrival of Kinect hints that there may be even more exciting ways to interact with your computer just around the corner.

Friday, June 11, 2010

The importance of your "private" email address

If you go back a few short decades, email addresses were things used by only a select few academics.

What point was there in having an email address when there was no ubiquitous communications network to transfer your messages anyway?

These days of course, everyone has at least one email address and usually several.

If you're smart you have public and private addresses. The public ones you use whenever you might expect them to be published or fall into the hands of spammers. Typically these are with web-based services such as GMail, Hotmail or Yahoo. Who cares if their servers are flooded with spam anyway?

Your private address(es) however, are things that should be guarded with much more care.

Private email addresses must be chosen carefully, so as to ensure their longevity and to avoid coinciding with guesses made by spammers. These addresses are best associated with your own domain name rather than your ISP or other domains that are beyond your control.

Many a Net-user has found themselves "disconnected" from regular contacts after an ISP folds or a minor web-based service has gone belly-up without warning -- effectively taking all the email addresses that depended on their domain name with them.

If you must use a web-based service for your "private" email addresses it's always a good idea to use cryptic or non-obvious names. Opting for an obvious name such as bob23@gmail.com will net you a mountain of spam almost immediately, as will john666@hotmail.com -- even if those addresses are available. Make your private email address long and cryptic. fftw99skvm.ppj@... is a good example, nobody's going to guess that one!
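For example, a long, hard-to-guess local part along the lines suggested above can be generated like this (the domain shown is just a placeholder):

```python
# Generate a cryptic local part for a "private" email address.
import secrets
import string

def cryptic_local_part(length=14):
    alphabet = string.ascii_lowercase + string.digits
    # secrets (rather than random) so the result isn't predictable
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(cryptic_local_part() + "@example.com")
```

With 36 possible characters in each of 14 positions there are around 36^14 (roughly 6 x 10^21) possibilities, which puts the address well beyond the reach of a spammer's dictionary of guesses.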

Now imagine you've gone to all the trouble of creating and protecting a nice "private" email address. The benefits are manifold...

Your intray is devoid of spam and you only get messages from people you consider to be important. What's more, you don't have to worry about genuinely important messages getting caught up in your spam filters, because you won't need spam filters.

Many important people (such as the rich and famous) rely on their private email addresses being kept secret. Without that secrecy, email would be virtually useless to them -- constantly clogged with messages from people they don't know and don't want to know.

Now imagine how you'd feel if, despite your careful planning and best efforts, that secret/private email address fell into the hands of hackers and became part of goodness knows how many spam-mailing lists.

Well spare a thought for iPad users in the USA who encountered exactly this problem last week when a stuff-up at US mobile provider AT&T saw a huge number of email addresses harvested by spammers.

How did they do it?

Well, they discovered that by submitting the ID number of an iPad through a carefully crafted script, the telco's system returned the email address associated with that iPad.

Amongst those who lost their highly valued "private" email addresses to spammers were the likes of film producers, Microsoft and Apple executives, publishers, high-ranking military officers and a whole host of others who'd rather not be named.

Right now, these folk are facing a tsunami of spam directed at those previously secret email addresses and chances are that they're having to create new addresses and advising all those affected about the change. It's a hugely expensive and annoying situation to be in.

If this proves anything, it shows that the more complex and sophisticated our technology becomes, the more fragile it can also be. Years of time and effort spent carefully protecting an email address from spam can be negated in the few milliseconds it takes for hackers to exploit a vulnerability in that complex technology.

Perhaps it also shows that being an early adopter is not without its risks.

In the meantime, it might pay to start creating a backup "private" email address, for when the inevitable happens.

Friday, June 4, 2010

Hi-tech weapons, a growing market

There's one thing that drives advances in technology faster than anything else -- it's armed conflict and, in particular, full-blown war.

Fields such as electronics and aviation absolutely rocketed ahead (no pun intended) during WW2 and the cold-war that followed.

Within the space of a few short years we saw the invention and widespread use of the jet engine, the invention and application of practical rocket technology, plus the arrival and development of radar and other radio-based navigational aids.

Even computing was given a giant kick in the pants with the need for devices such as the Colossus, arguably the world's first electronic computer.

It seems that even when funding dwindles for other forms of research, there's always plenty of cash available for the design, development and improvement of superior weapons.

So where are we up to with all these hi-tech weapons?

Well it's common knowledge that GPS and terrain-guided cruise missiles are currently the "state of the art" in arms technology. They can deliver anything up to and including a nuclear payload to targets which may be hundreds or even more than a thousand kilometres from the launch-point.

There are also a whole lot of new non-lethal weapons now being developed. These are specifically designed to deal with civil unrest, riots and non-military conflicts where disabling people is seen to be more acceptable than killing them stone dead.

One example of this is the Taser device now available to police. A small pyrotechnic charge propels a couple of darts, trailing wires, over a relatively short distance where they will puncture a target's clothing and skin to deliver a (usually) non-lethal electric shock. The original basic Taser model has now been augmented by hi-tech add-ons such as a laser-sight and an automatic video camera that records the events immediately leading up to and following its deployment.

Another proposed piece of hi-tech disablement weaponry is the electromagnetic pulse (EMP) weapon. These are designed to disable not people but vehicles and other electronic devices.

Right now the US government is considering a request to place EMP weapons along the USA's border with Mexico so as to try and control the flow of illegal immigrants. Other calls have been made for such devices to become a standard part of highway patrol equipment -- effectively replacing stop-sticks and other low-tech methods of disabling fleeing vehicles.

However, when it comes to controlling or disabling crowds of people rather than individuals, things get even more interesting, once a bit of high-technology is applied to the problem.

The Israelis are trialling a "sonic cannon" that uses intense soundwaves to disable and disorient crowds without causing permanent injury.

The US military have built and trialled what amounts to portable microwave-ovens that can bathe a crowd of enemy soldiers or rioters in a beam of radio frequency energy powerful enough to cause an intense burning sensation that forces them to flee the area.

These are just some of the latest hi-tech weapons systems that we know about. No doubt there are many more which remain classified and may be even more interesting in their application of science to the vexing issue of defending a nation or keeping the peace.

And to think, at one time, all we had to defend ourselves or attack others with were our fists and maybe a rock or tree-branch.

Isn't science wonderful?

Maybe this time, not so much.

Friday, May 28, 2010

The sky is falling (not today, but soon)

As a civilization, we have become incredibly dependent on a vast array of orbiting satellites.

In the span of a single lifetime (my own) we've gone from having no man-made objects in orbit to the point where now we're approaching saturation point and are hugely reliant on the services they provide.

It's not that the heavens are clogged with satellites, it's that they're becoming increasingly strewn with "space junk" over which we no longer have any control.

In fact, items of space junk outnumber in-use satellites by a ratio of 336:1.

There is now growing concern that it may take just one major collision between objects of any significant size to trigger a massive chain reaction that could cripple many other communications and navigation satellites.

Although much of the "space junk" currently in orbit is very small, even something as tiny as a screw can knock out a large satellite if it impacts at the right point with enough velocity.

Any major impacts would be sure to generate a cloud of much smaller pieces, each of which has the potential to rip through other orbiting objects with more energy than the bullet from a high-powered rifle. An example of this was the collision last year between an inoperative Russian satellite and a communications satellite operated by the US company Iridium. Scientists estimate that these two objects broke into more than 1,500 fragments, which are still in orbit around the Earth.

The effect of a Chinese "satellite-killer" test against an orbiting bird produced an astonishing 150,000 pieces of space-junk.

The problem and potential disaster it may produce is even greater than you might think.

Should a chain-reaction of impacts be set off, not only would this eventually render large numbers of satellites inactive but it would also create a cloud of fragments that would continue to circle the earth at that distance for many years to come. This shell of shrapnel would effectively make it impossible to replace those damaged satellites, as the new units would also be subjected to impact from the orbiting fragments.

Fortunately, the issue does seem confined mainly to satellites in low-earth orbit (LEO), just 1,000km or less above the surface of the planet. There appears to be less concern being voiced over the risks associated with satellites lodged in geostationary orbits, which are about 36,000km above the surface. This is perhaps because the area of a sphere with such a markedly increased radius is much greater and therefore the odds of an impact are dramatically reduced.

Geostationary satellites also have essentially zero motion relative to each other, so the velocity of any impact would be minimal.
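A rough back-of-envelope check of that sphere-area argument, using Earth's mean radius of about 6,371km and the standard figure of roughly 35,800km for geostationary altitude:

```python
# Sphere area grows with the square of the radius, so the same amount of
# junk is spread far more thinly at geostationary distance than in LEO.
import math

EARTH_RADIUS_KM = 6371
leo_r = EARTH_RADIUS_KM + 1000    # LEO at ~1,000 km altitude
geo_r = EARTH_RADIUS_KM + 35786   # geostationary altitude

def sphere_area(r_km):
    return 4 * math.pi * r_km ** 2

ratio = sphere_area(geo_r) / sphere_area(leo_r)
print(f"the GEO shell has ~{ratio:.0f}x the area of the LEO shell")
```

The ratio works out at roughly 33, so a given amount of debris at geostationary distance is spread over about 33 times the area it would occupy in a 1,000km orbit, and that's before allowing for the near-zero relative velocities out there.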

So what happens if this chain reaction does occur?

Well, the vast majority of communications satellites tend to be geostationary and would thus avoid the resulting mayhem. However, the networks associated with satellite telephones, GPS navigation and some kinds of military surveillance could be dramatically impacted.

Even the loss of the GPS network would pose a major problem to the world's transport services, forcing airlines, shipping and other long-distance craft to revert to older, less accurate navigational systems.

There is a small chance that the international space station may also have to be evacuated for fear of damage from the resulting space debris.

So yes, one day, when you look up, you might find that the sky is falling and the first you may know about it is when the sat-nav system in your car goes deathly quiet.

Friday, May 21, 2010

A new display technology - the good and the bad of it

Within a few short years, that LCD display on your desktop, mobile phone, laptop and even your widescreen TV may be "so last week".

That's because a number of companies claim they're making huge advances in the area of flexible displays that are not only lighter, tougher and more efficient than LCDs, but can also roll up so that, when not in use, they take up far less space.

As mobile phone users are well aware, when choosing a phone these days you really need to make a compromise between overall size and screen area. Some models, like the iPhone, have opted to completely dispense with a conventional keyboard so as to gain maximum screen space, while others have resorted to sliding keyboards and all manner of other strategies.

But imagine a phone where the screen can be unrolled like a blind when needed -- or which automatically slides out of the phone as required. With this technology, the phone itself could be as small as a pen -- because some of these new flexible paper-thin displays are also touch-sensitive, eliminating the need for a keyboard.

A leading player in this field right now is HP who are chasing the lucrative military market with the promise of active maps. These maps appear as a simple sheet of translucent plastic but when activated, can display any of the topographical or photographic images stored in a tiny integrated processor system. One map to rule them all, so to speak.

Another clear application for this exciting new technology is in the burgeoning market for e-book readers...

Right now, even state-of-the-art readers like the Kindle and iPad are too heavy and bulky for many people to consider practical. Readers based on thin, flexible film displays may change all that, allowing a reader, complete with a huge library of titles, to be made almost as thin and light as just a few pages of the book it replaces.

The only downside of these new low-cost, thin-film, flexible display materials is that there are already plans afoot to harness the technology for advertising.

Eventually, many of the static posters and billboards we ignore every day as we go about our business will become highly animated distractions. It will be like having Flash applets littered throughout the real-world.

I really have to wonder if the benefits will outweigh the penalties of this technology :-)

Friday, May 14, 2010

Apple's two-fingered gesture to competitors

It seems that whenever new user-interface technology appears, lawsuits follow close behind.

Those who can remember when the first GUI interfaces began to appear on personal computers will recall that Apple and Digital Research became involved in a bit of legal biffo over who owned the intellectual property associated with a virtual trash-can.

Goodness knows how much money was spent on lawyers over the issue but now it all seems so trivial -- although at the time it was a deadly serious business.

Well the mouse-driven GUI has been "the" user-interface standard for desktop and laptop computers for decades but now there's a new kid on the block.

The advent of mobile phones and other small computing appliances with ever-larger touch screens has seen the introduction of "multipoint gestures" as a key component of modern user-interfaces.

Simply touching a virtual button on an LCD is no longer a powerful enough way to interact with these devices. Manufacturers have come up with cleverer schemes that recognise pinching, parting and other gestures involving the use of more than one finger and movements beyond a simple touch.

But who owns these "gestures"?

Well (once again) we have Apple coming out with all (lawyers') guns blazing and filing suit against another major handset maker, HTC.

Just as it did with the trash-can, Apple is laying claim to multi-touch gesturing as its own sacred and patented intellectual property, making it clear that (as far as the big A is concerned) nobody else can use this technology without their permission.

So why don't HTC simply come up with their own alternatives rather than knowingly infringe Apple's patents?

Well it seems that they can't actually come up with anything better and they know that if they're going to compete head on with Apple, they need to provide this level of intuitive interface.

In fact, the patenting of multi-point gestures could be a real roadblock for all the other mobile device-makers who will find themselves stuck with second-rate user-interfaces and reduced market share as a result.

Will Apple license this technology to competitors?

I doubt it. After all, why rent out the goose that is laying the golden egg? Better that they keep this stuff for themselves and reap the benefits that come from higher sales of their iPhones and iPads.

In fact, such is the "do or die" importance of multi-touch gesturing that other big-name companies such as Palm and Motorola are said to be contemplating their own multi-point gesturing interfaces, possibly risking legal action from Apple in the process.

However, all hope is not lost.

Back in the early days of the PC GUI, Apple did cross-license technologies with Microsoft which explains why Windows ended up with a trash can.

The real question is whether HTC, Palm, Motorola or any of the other mobile device makers has anything Apple might want in return for such a license. If not, Apple may be quite prepared to give a different kind of two fingered gesture to its competitors when it comes to this technology -- and, as they've proven in the past, they're quite prepared to back that up with legal action.

Of course if you think you've developed something better than multi-point gesturing as a user-interface to small hand-held devices, you may be sitting on a goldmine.

Friday, May 7, 2010

iPad in, Netbook out

If reports out of the USA are correct, it would appear that buyers may find themselves able to score some pretty hot deals on Netbooks in coming months.

Why?

Well, apparently a significant proportion (44%) of those who were previously planning to buy a Netbook have instead opted for the iPad as a "sexier" alternative.

Even though the iPad lacks true multi-tasking, has some wireless connectivity issues and locks its purchasers into Apple's software, users are flocking to it in droves.

Whether it's just a fashion trend or an acknowledgment that the iPad's simple, intuitive user-interface and 10-hour battery life are ultimately more important to people than anything the Netbooks have to offer is a subject for debate.

When I first saw the iPad promotional video and read the specs I knew this was going to be a massively popular product and was surprised that so many commentators said it wouldn't sell because nobody "needed" it.

Nobody "needs" HD TV either, but it sells like hotcakes, because it's what people "want" that empties their wallets just as much as what they need.

But why is the iPad selling in such huge volumes? (over a million shipped so far)

Because, unlike a Netbook or laptop, it isn't a computer -- it's a tool, and a very well designed tool at that.

iPad users don't have to worry about what OS they're using or what anti-virus software's installed. They don't need to concern themselves about compatibility with this application or that peripheral.

Rather than all that geek-stuff, they've purchased a shiny rectangle of fun that lets them read e-books, watch movies and surf the web in a most hassle-free way.

It's not a generic untailored product that attempts to be all things to all people -- it's a slick "appliance" that just does a few things really, really well.

If you're one of the millions who really want an iPad, that's probably good and bad news.

Good, because it's an endorsement that you'll be getting a pretty cool piece of kit.

Bad, because it means that, with demand continuing to remain high, prices will not drop and you may find yourself waiting longer than you'd hope to get your hands on one.

If you're a Netbook lover or someone who likes the challenge of trying to get your computer to do as many different things as humanly possible (all at the same time) then you'll also be happy.

Should the reports be true and the market be turning away from Netbooks, demand will fall and, sure as eggs, with falling demand come falling prices.

Thank you Apple for making life better for iPad and Netbook users alike.