Sunday, January 31, 2010
How would these affect you? This article will compare the two cloud storage offerings on price, speed, usability, service level agreement, and developer support.
Tuesday, January 26, 2010
However, just like driving a car daily yet still being curious about what is under the hood and checking the oil level on weekends, I am curious about how Azure Storage works.
Monday, January 18, 2010
But what if you’d like to enjoy the benefits of becoming an Azure blob storage user without writing any code? What if you’ve already stored terabytes of information with another provider, like Amazon S3? Is there an out-of-the-box solution that can help? As you might guess from the boldness of the question, the answer is yes.
Tuesday, January 12, 2010
Cloud Storage Services
Cloud storage services were relatively unknown until Amazon popularized the now well-known S3 (Simple Storage Service). S3 may not have been the first cloud storage service, but it was among the first available to the public, with billions of objects stored. Now we are seeing other cloud storage vendors coming out, such as Nirvanix, Windows Azure, EMC Atmos Online/AT&T, Parascale, and Box.net. Google has long been in the cloud services business with the GMail service, Google App Engine, the Picasa photo service, and so on, with Google storage backing them up.
Desktop Access To the Cloud
A common theme of cloud storage services is that they all provide some API (Application Programming Interface) over HTTP. If cloud storage is to become as popular as FTP has been for the past decade, it needs a desktop client that delivers the remote storage to the user's desktop, preferably as a virtual network drive. After all, nobody wants to code a Ruby on Rails or .NET application just to get a simple transfer done.
New Challenges - So Many Different APIs
It isn't like FTP, where every FTP server in the world speaks the same FTP protocol. Since cloud storage services are so new, each vendor pretty much defines its own API. A common protocol might appear in the future, such as the Simple Cloud API effort. For now, however, each vendor goes its own way in order to deliver a quicker-to-market solution. As a result, every cloud storage service has a different API.
Basically, the Cloud Storage APIs can be categorized into several groups.
(1) REST over HTTP/HTTPS
Amazon S3, Windows Azure, Nirvanix, EMC Atmos Online, Google Data API, Box.Net
- Pro: Simple; easy to achieve application compatibility on the server end, pushing the complexity to the client end if so desired.
- Con: Needs developer support to write new applications for these APIs.
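To make the REST style concrete, here is a minimal sketch of how a client might sign an Amazon S3 REST request (the HMAC-SHA1 scheme S3 uses, often called AWS signature version 2). The access key, secret key, bucket and object names below are made-up placeholders, not real credentials.

```python
import base64
import hashlib
import hmac
from email.utils import formatdate

# Made-up placeholder credentials for illustration only.
ACCESS_KEY = "AKIAEXAMPLE"
SECRET_KEY = "secret-key-example"

def s3_auth_header(verb, bucket, key, date, content_md5="", content_type=""):
    """Build an S3 Authorization header by HMAC-signing the request elements."""
    resource = "/%s/%s" % (bucket, key)
    # The string-to-sign joins the request elements with newlines.
    string_to_sign = "\n".join([verb, content_md5, content_type, date, resource])
    digest = hmac.new(SECRET_KEY.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return "AWS %s:%s" % (ACCESS_KEY, signature)

date = formatdate(usegmt=True)  # e.g. "Sun, 31 Jan 2010 12:00:00 GMT"
header = s3_auth_header("GET", "example-bucket", "hello.txt", date)
print(header)
```

The whole request is then just an ordinary HTTP GET with this one extra header, which is why REST is so easy to support on the server side.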
(2) WebDAV over HTTP/HTTPS
Parascale, Box.Net and other vendors
- Pro: Existing desktop client support.
- Con: The WebDAV protocol is too big and too complicated. The server side keeps the complexity, and new applications will have to take on the same complexity.
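For comparison with the REST style, even a simple directory listing in WebDAV involves an XML request body. Here is a minimal sketch of a PROPFIND request body (the "Depth: 1" header asks for the immediate children of a folder); the property names are from the standard DAV: namespace.

```python
import xml.etree.ElementTree as ET

# Minimal PROPFIND body asking for just a name and a size per entry.
propfind_body = (
    '<?xml version="1.0" encoding="utf-8"?>'
    '<D:propfind xmlns:D="DAV:">'
    '<D:prop><D:displayname/><D:getcontentlength/></D:prop>'
    '</D:propfind>'
)

# Headers for listing the contents of a folder on some WebDAV server.
headers = {
    "Depth": "1",                      # list the folder's immediate children
    "Content-Type": "application/xml",
}

# The body must be well-formed XML in the DAV: namespace.
root = ET.fromstring(propfind_body)
print(root.tag)  # prints {DAV:}propfind
```

The server answers with a multi-status XML response that the client must also parse, which is part of why the protocol feels heavy next to a plain REST GET.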
(3) FTP
Some established companies simply repackage existing products with a cloud label, offering FTP access.
Cloud Vendors' Approach To the APIs
- Big companies such as Microsoft push for the simple REST API from day one, with lots of developer support.
- Smaller companies may offer dual-protocol support, in both REST and WebDAV:
  - easier adoption early on with existing WebDAV clients;
  - a later migration to a simpler REST API for internal growth and scalability reasons.
- The simpler the API, the easier it is to manage, both for the vendors and for the developers writing apps. Simplicity attracts developers.
Gladinet's effort is to combine all these different cloud storage services and bind them to the Windows operating system. The end result is a ubiquitous client that turns Windows Explorer into a storage portal: a virtual network drive with a drive letter, within which all the supported cloud storage services are mountable. (You can find more details of the functionality in the reference.)
The speed of the internet determines the use case. For example, if everyone were still on dial-up, nobody would be talking about cloud storage.
Use Case I - Direct Random Access
Broadband is the norm nowadays, with speeds varying from roughly 100 KB/s to 800 KB/s. A response time of under 5 seconds is required for good usability with direct access, which means direct random access to cloud files of 500 KB to 4 MB in size will be very usable. As a simple experiment, a directory listing of the My Documents folder on my PC reveals 237 files totaling 46 MB: on average about 200 KB per file for the files I use most often.
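The usability claim above can be checked with simple arithmetic: transfer time is just file size divided by download speed, using the file sizes and broadband speeds quoted.

```python
def transfer_seconds(size_kb, speed_kb_per_s):
    """Rough transfer time in seconds: file size divided by link speed."""
    return size_kb / speed_kb_per_s

# Worst case: a 4 MB file on a slow 100 KB/s link.
slow = transfer_seconds(4 * 1024, 100)    # 40.96 s
# Typical case: a 200 KB file on the same 100 KB/s link.
typical = transfer_seconds(200, 100)      # 2.0 s, under the 5 s target
# Best case: a 500 KB file on a fast 800 KB/s link.
fast = transfer_seconds(500, 800)         # 0.625 s
print(slow, typical, fast)
```

As the numbers show, the average 200 KB document is comfortable even on a slow link, while files at the 4 MB end of the range only meet the 5-second target toward the faster end of the broadband speed range.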
Use Case II - Online Backup
At current broadband speeds (slower than an 802.11b wireless network), a bigger use case is online backup, which is write-once and seldom-read. At this stage, cloud storage can't replace either network attached storage or the local hard drive because of the speed, but it is perfect for backing things up.
Use Case III - File Server with Cloud Backup
An interesting twist is to combine Use Case I and Use Case II by moving the access point from the user's desktop to a file server. Users get direct random access on a network server, while the network server is backed up by online storage. The Cloud Gateway, a second product from Gladinet, fits this use case.
With fiber becoming the next wave of broadband, internet speeds will catch up with local wireless networks, and we will see more network attached storage becoming cloud aware. More and more people will use cloud storage in a supporting role to the main local hard drive, even as the local PC and the local hard drive get faster too.
As of this writing, Seagate i365 has just announced support for a cloud API, and Google Docs has opened up with 1 GB of free storage. As internet speeds increase, you will see more and more vendors adopting the cloud storage paradigm, and more and more developers writing applications for cloud services. Eventually, the internet is the computer.
1. Manage Azure With Ease - An overview of basic functionalities using Azure as an example
Friday, January 8, 2010
This article will show you how to back up your Google Docs files on a daily basis to another cloud storage service.
First you need to install Gladinet Cloud Desktop and map in your Google Docs and any other cloud storage services you have. In the following picture, I have Azure Storage, Amazon S3, an FTP server and Google Docs all mapped in the Gladinet drive. (I have Synaptic Storage too, but it is not configured yet.)
From the system tray menu, you can open the Create Google Docs Backup Task.
A Backup Wizard will appear, asking where you want to back up to. Select Azure Blob Storage.
There are two modes of backup: a smaller-scale one that uses a specific folder (the Drop Folder) inside your Google Docs account, and a bigger-scale one that backs up everything from the root folder.
Since backing up from the root is easier to understand, I will pick the other option, Use Drop Folder, here.
Inside my Google Docs account, there is now a new empty folder, Backup To [Azure Blob Storage]. Any file saved to this folder from Google Docs will be periodically backed up to Azure Storage.
If you pick the other option, Backup All Files, all the files in your Google Docs account will be backed up to Azure Storage periodically.
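Either way, the drop-folder mode boils down to a simple pattern: periodically copy anything new in a watched folder to the backup destination. Here is a minimal sketch of that idea; the function and folder names are made up for illustration, and this is not Gladinet's actual implementation.

```python
import shutil
from pathlib import Path

def backup_new_files(drop_folder, backup_folder):
    """Copy files from drop_folder that are not yet present in backup_folder."""
    drop = Path(drop_folder)
    backup = Path(backup_folder)
    backup.mkdir(parents=True, exist_ok=True)
    copied = []
    for src in drop.iterdir():
        dest = backup / src.name
        if src.is_file() and not dest.exists():
            shutil.copy2(src, dest)  # copy file contents and timestamps
            copied.append(src.name)
    return copied
```

A real backup task would run this on a timer and also detect modified files, for example by comparing timestamps or checksums rather than only checking for missing names.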
Wednesday, January 6, 2010
(A) 500 MB free storage for everyone
Tuesday, January 5, 2010
Yesterday, Windows Azure transitioned from Public Preview mode to Full Production mode. It took me a while to understand how to set up an Azure account and start using Azure Storage. I will share the steps in this article.
Hopefully, in three big steps, you will be using your Azure Blob Storage from Windows Explorer, with smooth sailing thereafter.