Random Ramblings About Making Games and Stuff from Cloud

Archive for the ‘Cloud Computing’ Category

Things I learned building an OData service in Azure and using WP7 as the client

I wanted to build a Windows Phone 7 app called “Stuff I Like”. The app would have an OData service running in Azure and a client that would cache and sync to that service. In the previous post I wrote about the stuff I learned building the WP7 app. In this part I will reveal my findings on the service side of things. In the next post I will delve into the authentication side of the app.

If you are planning to add federated authentication using the Azure Access Control Service, then you might want to start by building the authentication first. For me it was waaaay easier to add a normal ASP.NET web page, add authentication to that, and add the WCF Data Service after the login worked. I would recommend downloading the Azure Training Kit (http://www.microsoft.com/en-us/download/details.aspx?id=8396) and completing the lab exercise at http://msdn.microsoft.com/en-us/identitytrainingcourse_acsandwindowsphone7.aspx

Now back to business.

I followed this Data Service in the Cloud walkthrough: http://msdn.microsoft.com/en-us/data/gg192994. I decided to design the data model using the Visual Studio designer. After playing around with the tool I managed to “draw” the data model, and after that it was just a matter of syncing it to the database.

This is how I drew the data model

After syncing the above data model into the database, setting up the service and running it, I quickly found that my WCF Data Service was not working at all. The message “The server encountered an error processing the request. See server logs for more details.” was shown to me quite frequently. Well, “the bug” was quite simple to fix, and these debugging instructions helped me a lot (http://www.bondigeek.com/blog/2010/12/11/debugging-wcf-data-services):

  1. A missing SetEntitySetAccessRule
  2. A missing pluralisation on the SetEntitySetAccessRule
  3. A missing SetServiceOperationAccessRule

After debugging the entity set access rules with a simple “*” AllRead rule, just to check that I had not made a typo, I quickly found out that I indeed had 😦 The typo was in an EdmRelationshipAttribute, and it was what caused the exception. After that stupid mistake was fixed, things started to look better.
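
For reference, this is roughly what that wide-open debugging configuration looks like in the service class. A minimal sketch, not my exact code: the entity context name (StuffILikeEntities) is an illustrative placeholder.

    using System.Data.Services;
    using System.Data.Services.Common;

    // Hypothetical names: the service class is generated when you add the
    // WCF Data Service item, and the context class comes from the .edmx model.
    public class WcfDataServiceStuffILike : DataService<StuffILikeEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Wide open for debugging only: every entity set is readable.
            // Once things work, replace "*" with per-entity-set rules.
            config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
            config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
            config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
        }
    }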

If you need more instructions on how to turn on debugging messages then just follow these instructions:
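
In short, the usual way to turn those messages on in a WCF Data Service is the combination below. This is a sketch only, and strictly for debugging, since it leaks exception details to clients; the class names are the same placeholders as above.

    using System.Data.Services;
    using System.ServiceModel;

    // Surfaces the real exception instead of the generic
    // "The server encountered an error processing the request." message.
    // Remove both switches before deploying to production.
    [ServiceBehavior(IncludeExceptionDetailInFaults = true)]
    public class WcfDataServiceStuffILike : DataService<StuffILikeEntities>
    {
        public static void InitializeService(DataServiceConfiguration config)
        {
            // Sends verbose error payloads back to the caller.
            config.UseVerboseErrors = true;
        }
    }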

This is how my service looked after I finally got it running

After I managed to get the service defined and running, I took a second to make the OData feed a bit more readable, so that you can consume your OData feed using Internet Explorer and other browsers: http://msdn.microsoft.com/en-us/library/ee373839.aspx

This is how OData feed looks before you tweak it a bit.

For some reason I managed to first add “m:FC_TargetPath” and similar properties to the wrong XML element. So make sure you scroll down the file and add them to the correct place 🙂
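
In case it saves someone the same confusion: the feed customization attributes belong in the conceptual model, inside the edmx:ConceptualModels section further down the .edmx file, not in the storage (SSDL) schema you run into first. A rough sketch with made-up entity and property names:

    <!-- Inside edmx:ConceptualModels / Schema, which declares
         xmlns:m="http://schemas.microsoft.com/ado/2007/08/dataservices/metadata".
         Entity and property names below are just examples. -->
    <EntityType Name="Stuff">
      <Key>
        <PropertyRef Name="Id" />
      </Key>
      <Property Name="Id" Type="Guid" Nullable="false" />
      <!-- Map the Name property into the Atom title element of the feed. -->
      <Property Name="Name" Type="String" Nullable="false"
                m:FC_TargetPath="SyndicationTitle"
                m:FC_ContentKind="text"
                m:FC_KeepInContent="true" />
    </EntityType>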

This is how the OData feed will look after you tweak it a bit

Another thing that took me a couple of hours to figure out was that Internet Explorer does not show all OData results in a consistent way. So before you start heavy debugging, check the source code of the returned HTML page and you should see the expected result in XML format. Or you could use another browser. For example, this call did not seem to return anything until I checked the source code of the page: http://localhost:57510/WcfDataServiceStuffILike.svc/StuffSet(guid'cf1bfd2f-99f3-4047-99f8-22bc1aad1b99')/GategorySet
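
If you want to take the browser's rendering out of the picture entirely, a tiny console check like the sketch below works too. It simply reuses the query URL from above; the Accept header makes sure you get the raw Atom XML back.

    using System;
    using System.Net;

    class FeedCheck
    {
        static void Main()
        {
            using (var client = new WebClient())
            {
                // Ask for Atom explicitly so nothing reformats the response.
                client.Headers[HttpRequestHeader.Accept] = "application/atom+xml";
                // Same local query URL as in the post.
                const string url =
                    "http://localhost:57510/WcfDataServiceStuffILike.svc/" +
                    "StuffSet(guid'cf1bfd2f-99f3-4047-99f8-22bc1aad1b99')/GategorySet";
                Console.WriteLine(client.DownloadString(url));
            }
        }
    }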

So that’s it. Using Visual Studio this was quite easy, and I actually spent most of my time figuring out why some configuration did not work rather than writing the code. This might be because I have an “unclean” dev environment, or because I made lots of changes to the above demos while I followed them. That was mainly because I wanted to build my own app and not simply type in the demos and labs.

I bet that if you build your dev environment correctly and follow the labs and demos to the letter, you won’t see as many problems as I witnessed. But where is the fun in that? 😉

Azure SQL backup using Red-Gate Cloud Services

I stumbled upon the Red Gate Azure backup service: https://cloudservices.red-gate.com/

After using the beta for a couple of weeks I am really impressed. Everything is easy, the UI is intuitive, and you don’t need lots of configuration to make this work. You can set up a basic daily backup job in a matter of minutes if you have an Azure Storage or Amazon S3 account ready. If not, it still does not take long. You spend most of the time digging up usernames and passwords, not figuring out how things work! Splendid.

So let’s go through how to do a daily Azure SQL backup into Amazon S3 storage.

The Red-Gate Cloud Services dashboard is really clean and simple. Just select the action you want to perform.

First, after registering for Red-Gate Cloud Services and logging in, select “Backup SQL Azure to Amazon S3” from the dashboard.

Just fill in the Azure SQL credentials and Amazon S3 keys.

Second, fill in the Azure SQL server name (xxxxxxx.database.windows.net) and the login credentials for that database, and press the refresh button next to the Database drop-down. Select the database that you want to back up. Next click the “AWS Security Credentials” link and log in to your Amazon AWS account. You will be taken directly to the place where you can find the keys.

"Access key id" goes to "AWS Access Key" and "Secret Access Key" is shown after pressing "show" link in Amazon AWS.

Note that you need an S3 bucket. If you don’t have one you can create it here: https://console.aws.amazon.com/s3/home. I will not explain how that is done, but it is really easy. After you have inserted the Amazon keys you can press the refresh button next to the Bucket drop-down in the Red-Gate UI. Then fill in the name that you want your backup file to have. Don’t worry about the date stamp, because the tool will add the date at the end of the filename. Press the “continue” button to select scheduling options or to back up now.

Just select when you want your daily backup to run, or just press the "Backup now" button.

Next, fill in the exact time you want your backup to run in 24h format, select the time zone, select the weekdays on which to run it, and press the schedule button. You can also just press the backup now button for a one-time backup. Note that there is also a monthly backup option.

You should see your scheduled backup in the next screen. When your first backup is done it appears here under the history header.

Next you are taken to the schedules and history view. Here you can view details of the upcoming backup operation, cancel it, or invoke a backup right away. Just move your mouse over the upcoming event.

Just move the mouse over a scheduled backup job and you can cancel the job or invoke it to run right now.

When a backup job has completed successfully you will receive an email, and a “log” line will appear in “schedules and backup history”.

Whether the job is successful or not, it will appear here under the history title

You can view the details of executed jobs by moving the mouse over a history line. If your backup job has run into errors you will see an error triangle as well as receive the error message via email. The email also contains a direct link to that specific history log line. Neat!

If your backup job has exceptions, it gets a warning icon in the history. The details link will contain an error message describing what went wrong.

Clicking on the link in the email will take you to the same view as clicking “details” next to a completed job.

Here you can read through what happened while the backup was executing. If you ran into errors they are also shown here. Just scroll down from the bar.

If your backup ran cleanly you should see the created bacpac file in the Amazon S3 bucket.

The backup file is safe and sound in Amazon S3

Note that you can just as easily use Azure Storage, or use FTP to upload the backup to your own backup infrastructure.

There are a couple of missing features that I would like to see: for example, a choice similar to the one in the Red-Gate backup tool to first create a copy of the database (for transactional safety), and the ability to encrypt the backup file before it is uploaded to Amazon or Azure storage. But even without these small features this is still awesome!

Carefree clouding!

A Lot of Talk About Vendor Lock-in

Still free, or did you already lock yourself in?

I originally wrote this post for my company blog.

In the press and in many online forums there’s a lively discussion about so-called vendor lock-in, a discussion about customers facing a situation where it is very difficult to replace the existing product with a new one, or the replacement costs are prohibitively high.

Indeed, vendor lock-in deserves your attention; it is good to stop and think before making software purchasing decisions. Let me list some common questions that may come to mind:

  • What if it is very difficult or impossible to get my own data out of the system?
  • What should I do in case where my software provider drastically raises the license fees?
  • What if my present provider stops supporting the product I’ve bought when a new version of it is coming to the market?
  • What if I find that another provider offers a much nicer and more suitable solution, and I wish to export all my data into the new one instead?
  • What if I am forced to take the new version of the present solution into use? Will it support our good old way of doing things – or must the entire organization AGAIN learn to use a new system?
  • Must I once again survive a long and laborious implementation project?
  • Do I make myself a ‘hostage’ by taking this software into use?
  • What if the terms of use of a software component become unacceptable and we must migrate from one piece of software to another?

All the above-mentioned points lead to a somewhat unpleasant situation, and no one seems to have a solution to it. Googling (or binging) the term ‘vendor lock-in’ does not bring up any solutions; only fears, threats, and horror stories. Vendor lock-in questions and fears have recently been rising in discussions thanks to the strong presence of various SaaS solutions and cloud computing technologies.

However, the fact is that vendor lock-in can’t be entirely avoided; it always occurs – to some extent. I mean always. Both with cloud/SaaS and with in-house software. But there’s a difference between these deployment methods.

The data of on-premise, in-house software lives on your own servers and in your own databases. In a way, that can feel safe, but the truth is that very often this data is unusable without an (internal) consulting project in which IT professionals transfer, convert, and create data for the next system. As you might guess, after that kind of approach it is common that a rather heavy implementation project starts: installations, testing teams, training sessions, and an eventual renewal of business processes.

All this results in a situation where the end result can be admired after an 8-month project… and has an effect on the company’s results after a year! Way too slow. And oops, did the business environment change during that time?

I also dare to claim that on-premise software is most often chosen without 100% knowledge of the data portability. If it is checked at all, it is often done by asking the vendor’s salesperson, “And the data can be easily imported?” Of course the answer is ‘yes’, accompanied by lots of reassuring head nodding, and no one really checks it in practice – before it’s too late to react.

SaaS services and most cloud providers do not require any technical installations; hence the implementation project is much slimmer. The ‘worst’ workload might be the training part. However, the best SaaS solutions are so simple to use that a massive training program is often not needed.

I’d say it is quite rude to frighten customers by speaking ill of vendor lock-in with SaaS and cloud computing providers if one offers on-premise software as the alternative. There is a risk that you are locked in much more, as parts of your IT infrastructure, process development, versions of your desktop clients and even operating systems become dependent on the on-premise software. The license prices of ‘normal’ software can bounce up just as easily, whatever the vendor. Just ask an ERP client 🙂

The vendor lock-in of SaaS vendors is, at its worst, only as deep as with ‘normal’ software. But most often it is shallower and less risky, I claim.

If you want to minimize your vendor lock-in, it is not about whether you take an in-house, on-premise or SaaS product, but about which SaaS product you choose.

Before a headlong SaaS decision you should at least browse through the articles mentioned below. They might help you better understand your options.

When you’re in the cloud, you’re ordering a cup of coffee, not the coffee maker.

In English:

http://info.isutility.com/bid/63587/Top-Cloud-Computing-Implementation-Concerns-and-how-to-address-them

In Finnish:

http://blog.sopima.com/2011/04/11/sekaisin-pilvipalveluista/

http://soft.utu.fi/saas/

Why does an SLA not make sense in the Cloud?

What do you really want to accomplish with a Service Level Agreement (SLA)? To punish, or to get the best support available as soon as possible? With traditional on-premise software, if there is a problem you are pretty much alone with it. The time and money between the binary hitting the fan and the fan being fixed come solely out of your pocket. In a traditional on-premise or dedicated-server software environment an SLA makes sense. You need to have some leverage and certainty that your software provider is at least mildly interested in fixing your problem.

When “Cloud Computing” hits speed bumps, the whole Internet holds its breath, the latest example being the Amazon incident on April 21, 2011. If a SaaS service is not up and running, the SaaS firm is losing a lot of money, and very fast. Most SaaS firms have a monthly recurring revenue model, and customers can cancel their subscription with one month’s notice. This means that customers can vote with their wallets. So you can be sure that a SaaS firm gives its fullest attention to getting its service up and running as soon as possible. If your provider has fully clouded its technology (so-called multitenancy), your application instance will be fixed as soon as the service is fixed. No one gets special treatment, good or bad. So there is no need to fear that your account is down while others are running.

Storm, calm or no cloud? You have no idea.

You should be able to see if your Cloud is in a storm or calm.

With a proper SLA the damages and indemnification are somehow tied to the amount of money you pay for the software, services included. With SaaS firms your monthly – and even yearly – fees are a fairly small amount of money, and so are the potential liquidated damages. This means that, for example, a 10% indemnification of the subscription value is merely a nominal sum. For example, if a SaaS service cost you a $59 yearly subscription, it would entitle you to about 50 cents of compensation for one month of downtime ($59 / 12 ≈ $4.90 per month, and 10% of that is roughly 49 cents). And before you try to negotiate a higher indemnification, talk to your own lawyer and ask if you should sign a contract with a liquidated damages clause worth over 100% of the contract value. Next, imagine asking a SaaS firm for exactly that. And guess the response. My point is that there is no realistic way to imagine an SLA between you and a SaaS provider that would have real monetary indemnification. The real penalty for a SaaS provider comes in the form of lost income, increased churn, and negative publicity. Any serious SaaS provider will do everything to avoid these.

All is well in the cloud

Seek transparency instead of indemnification.

What I am trying to say is that instead of asking what SLA levels you receive and what kind of compensation is possible, ask what your SaaS provider is doing to minimize downtime. It does not make sense to try to get the SLA as tight as possible. It makes more sense to make sure that the provider is “all in” with the cloud. If the service you are using truly has significant monetary value for your provider, it will make sure that the service runs as smoothly as humanly possible.

Make them prove that they know what they are doing, but not with an SLA. Ask about security, availability, recovery, and how you can monitor uptime. For example, Azure (Service Dashboard), AzureWatch and Sopima Oy provide RSS feeds to their customers to give transparency into their service levels.

Focus on finding signs of pre-emptive action, transparency and security – instead of indemnification in the SLA.

PS. If you want to know the 10 questions you should ask your SaaS vendor, and what the correct answers are, go here.

Why are Cloud Services bought behind IT’s back?

A cloud worker deep in his thoughts; yes, that's me thinking about shadow IT.

I originally wrote this post for my company blog.

A common situation today is that business units are purchasing SaaS services secretly, without the knowledge and/or acceptance of the IT department. The reasons are many, but at the very least they:

  1. do not believe that IT will give them permission to order the service,
  2. are afraid that IT will offer some custom-tweaked intranet setup instead,
  3. will get a “similar” but inferior service eventually, when IT gets time to do it.

How did we reach this point?

I would say that a lack of understanding and the illusion of total control are to blame. Similar findings can be found in a recent publication of the University of Turku, the SaaS Handbook, in consulting company Sulava’s Marco Mäkinen’s video about shadow IT, and in an article in Tietoviikko (the leading Finnish IT professional paper). I’m sorry that the previous references are all in Finnish, so you just have to take my word for it or use Google Translate 🙂

I’ve been thinking about this issue for a while, and my opinion is that the biggest problems are the following:

Firstly, IT says “no” because it does not understand the business problem at hand or the benefits that the chosen SaaS solution would offer. As an IT professional myself, I too used to start by listing problems and risks whenever a new service was presented to me. I did not believe that the solution would solve the issue because I simply did not understand how much it would actually help the business units. OK, I did some soul-searching and found a reason for My Inner Problem Troll: I have had such bad experiences with implementations of new software services. The sales cycle is long, the price is high, server procurement and installation are laborious, customization and integration projects are mandatory, end-user training is bad, and usability is poor. And very often after the software implementation project nobody actually wants to use the product – and everybody is sick and tired of it. These experiences lead further to problem number two.

In engineer style, we often tend to overdo things and aim for perfection, making everything ready at once. Very typical, at least in Finland, Nokialand. The new service must be fully integrated with other systems, the information must not sit in silos in different systems, the service must adapt perfectly to our processes (however outdated they are…), on top of that it must adapt to changing business needs, and finally, the rollout must happen to all personnel at once. Phew. Everything has to be one complete solution.

When the chosen service is finally in production, the business need may have changed and therefore nobody wants to use the service. In summary: all the integrations, customizations and implementation took too much time and way too much money.

The price of a traditional software product easily leads to problem number three.

Does this sound familiar? Because the purchased product was expensive and the implementation was painful, you desperately want to use the licenses and servers for almost any problem you encounter – even if there are better (and less expensive) solutions available. In other words, it is easy to end up trying to save money by using one single product to solve all the various needs.

I dare to argue that by doing things this way you will end up saving money at the wrong end. As an example: how many of you organize events with email and Excel sheets? Did you know that there are alternatives? If you count the hours you spend on Excel work and emails and compare that to services like Lyyti (a Finnish service), you don’t need to be a Nobel-awarded mathematician to understand a) the cost savings and b) the increased value to the event participants.

How did we end up here?

The need to control everything, futile (and expensive) perfectionism, fanatical avoidance of data silos, and emphasis on the potential risks instead of the benefits have led to a situation where SaaS services are purchased without the acceptance of IT management.

So what can we do about it? A lot! The solution is not to add more technology or interfaces.

Software companies have already accepted that agile development and early trials are the way to get a product that is good enough for the actual business needs and schedule. So why would an IT department want to act in a way that is suspiciously similar to the waterfall method? I would highly recommend that added agility and a demo mentality also be tried in IT departments – SaaS services are very easy to test and experiment with!

If a SaaS product does not meet your needs, it is really easy to skip it and test another one. The testing of new services costs way less than you think and it gives you possibilities to react to actual business problems – and not only to the problems that you can afford, have time to solve or know how to fix!

Researcher Antero Järvi from the University of Turku said this to me when I asked how to integrate SaaS services with the rest of the applications: “Don’t integrate anything before you use the service for a while and evaluate if you even want to continue to use the service.”
