Random Ramblings About Making Games and Stuff from Cloud


Azure SQL backup using Red-Gate Cloud Services

I stumbled upon the Red Gate Azure Backup service: https://cloudservices.red-gate.com/

After using the beta for a couple of weeks I am really impressed. Everything is easy, the UI is intuitive, and you don’t need lots of configuration to make this work. You can set up a basic daily backup job in a matter of minutes, provided you have an Azure Storage or Amazon S3 account ready. If not, it still does not take long. You spend most of the time digging up usernames and passwords, not figuring out how things work! Splendid.

So let’s go through how to set up a daily Azure SQL backup into Amazon S3 storage.

The Red-Gate Cloud Services Dashboard is really clean and simple: just select the action you want to perform.

First, after registering to Red-Gate Cloud Services and logging in, select “Backup SQL Azure to Amazon S3” from the Dashboard.

Just fill in the Azure SQL credentials and Amazon S3 keys.

Secondly, fill in the Azure SQL server name xxxxxxx.database.windows.net, the login credentials for that DB, and press the refresh button next to the Database drop-down. Select the database that you want to back up. Next click on the “AWS Security Credentials” link and log in to your Amazon AWS account. You will be taken directly to the place where you can find the keys.

"Access key id" goes into "AWS Access Key", and the "Secret Access Key" is shown after pressing the "show" link in Amazon AWS.

Note that you need an S3 bucket. If you don’t have one you can create it here: https://console.aws.amazon.com/s3/home. I will not explain how that is done, but it is really easy. After you have inserted the Amazon keys you can press the refresh button next to the Bucket drop-down in the Red-Gate UI. Then fill in the name that you want your backup file to have. Don’t worry about the day stamp, because the tool will add the date at the end of the filename. Press the “continue” button to select scheduling options, or back up right away.
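The date-stamping behaviour can be illustrated with a short sketch. Note that the function name and the exact stamp format here are my own guesses for illustration; the Red-Gate service appends the date for you, and its real format may differ:

```python
from datetime import date

def backup_key(base_name, day=None):
    """Build a date-stamped backup file name, mimicking how the
    service appends the date to the name you type in."""
    if day is None:
        day = date.today()
    return "{0}_{1}.bacpac".format(base_name, day.isoformat())

print(backup_key("mydb", date(2012, 5, 1)))  # mydb_2012-05-01.bacpac
```

Because each day gets a distinct key, yesterday’s file is never overwritten in the bucket.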

Just select when you want your daily backup to run, or press the "Backup now" button.

Next, fill in the exact time you want your backup to run in 24h format, select the time zone, select the weekdays when to run it, and press the “Schedule” button. You can also just press the “Backup now” button for a one-time backup. Note that there is also a monthly backup option.
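The scheduling options boil down to a simple rule: run when the current weekday is among the selected days and the clock matches the chosen 24h time. A rough sketch of that logic (the names are mine, not the service’s):

```python
from datetime import datetime

def should_run(now, run_at, weekdays):
    """Return True when a scheduled backup is due.

    now      -- a datetime in the chosen time zone
    run_at   -- (hour, minute) in 24h format
    weekdays -- set of weekday numbers, Monday=0 .. Sunday=6
    """
    return now.weekday() in weekdays and (now.hour, now.minute) == run_at

# A 02:30 backup scheduled for weekdays only:
print(should_run(datetime(2012, 5, 1, 2, 30), (2, 30), {0, 1, 2, 3, 4}))  # True (a Tuesday)
print(should_run(datetime(2012, 5, 6, 2, 30), (2, 30), {0, 1, 2, 3, 4}))  # False (a Sunday)
```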

You should see your scheduled backup in the next screen. When your first backup is done it appears here under the history header.

Next you are taken to the schedules and history view. Here you can view the details of the upcoming backup operation, cancel it, or invoke a backup right away: just move your mouse over the upcoming event.

Just move the mouse on top of a scheduled backup job and you can cancel the job or invoke it right now.

When a backup job has completed successfully you will receive an email, and a “log” line will appear in “schedules and backup history”.

Whether the job is successful or not, it will appear here under the history title.

You can view the details of executed jobs by moving the mouse on top of a history line. If your backup job has run into errors you will see an error triangle, as well as receive the error message via email. The email also contains a direct link to that specific history log line. Neat!

If your backup job has exceptions it gets a warning icon in the history. The details link will contain an error message describing what went wrong.

Clicking on the link in the email will take you to the same view as clicking “details” next to a completed job.

Here you can read through what happened while executing the backup. If you ran into errors they are also present; just scroll down the bar.

If your backup ran cleanly you should see the created .bacpac file in the Amazon S3 bucket.

Backup file is safe and sound in Amazon S3

Note that you can just as easily use Azure Storage services, or use FTP to upload the backup into your own backup architecture.

There are a couple of missing features that I would like to see: for example, an option similar to the one in the Red-Gate backup tool to first create a copy of the database (for transactional safety), and the ability to encrypt the backup file before it is uploaded to Amazon or Azure storage. But even without these small features this is still awesome!
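The first missing feature can be worked around by hand, since Azure SQL supports making a transactionally consistent copy with plain T-SQL. A minimal sketch that only builds the statement (the database names are placeholders; actually running it requires a SQL driver, permissions, and a connection to the server’s master database):

```python
def copy_database_sql(source, copy_name):
    """Build the T-SQL that creates a transactionally consistent copy
    of an Azure SQL database, to be run against the master database."""
    return "CREATE DATABASE [{0}] AS COPY OF [{1}];".format(copy_name, source)

print(copy_database_sql("mydb", "mydb_copy"))
# CREATE DATABASE [mydb_copy] AS COPY OF [mydb];
```

You could then point the backup tool at the copy and drop it afterwards, so the export never races with live transactions.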

Carefree clouding!

SQL Azure monitoring with Cotega

I just started to use this SQL Azure and SQL Server monitoring tool, http://www.cotega.com/, and I must say that it is exactly what I was looking for in an easy-to-use monitoring tool. I don’t need a complex, feature-rich and massive monitoring tool. I only need an alert when the database is not accessible, when performance seems to be below average, or when there is a spike in the usage. The service is still in beta but it looks really promising.

Just click "add SQL Azure database"

Setting up the database connection was easy. After login you are presented with the Dashboard view. Just click “add SQL Azure database” and fill in the database address, username and password in the popup. Finally press the “Add database” button.

Fill in the address and login details.

You can add notifications to your databases by going to the Notifications view and pressing the “Create Notification” button.

To add new notification just press "Create Notification"

Just fill in the name of the notification, select a monitoring rule from “select what you want to monitor”, the database, and the polling frequency.

Fill in the details and select monitoring target.

After you have selected the monitoring target you can select when to create a notification. In this case I have selected “when connection fails”. After this you can fill in an email address and even select a stored procedure to be run, though in the connection-fails case that does not make sense :). Finally press “Add Notification”.
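The “when connection fails” rule is essentially a periodic probe: run a trivial query and raise an alert when it throws. A minimal sketch of the idea (here `run_query` stands in for whatever database driver you use; it is not a Cotega API):

```python
def connection_ok(run_query):
    """Probe the database with a trivial query; True means reachable."""
    try:
        run_query("SELECT 1")
        return True
    except Exception:
        return False

def broken(sql):
    raise IOError("connection refused")

print(connection_ok(lambda sql: 1))  # True  -> no notification
print(connection_ok(broken))         # False -> notification fires
```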

Add an email address and select a stored procedure if you need one.

There you go. Just start waiting for those notifications to kick in.
There is a nice feature in the notifications: when you start to receive lots of them, for example concerning “connection fails”, you can temporarily disable the notification directly from the notification email.

When you get many notifications of the same type you have the option to disable the notification directly from the email.

You can check the logs for a specific notification to make sure that the rule works.

You can see how rules are being evaluated.

You can also view this info in a report view for performance analysis purposes.

It’s easier to spot performance problems from visualized reports.

A very promising, neat little monitoring tool for those who don’t need a complex solution for a simple problem!

Why SLA does not make sense in the Cloud?

What do you really want to accomplish with a Service Level Agreement (SLA)? To punish, or to get the best support available as soon as possible? With traditional on-premise software, if there is a problem you are pretty much all alone with it. The time and the money between the binary hitting the fan and the fan being fixed come solely from your pocket. In the traditional on-premise or dedicated-server software environment an SLA makes sense: you need some leverage and certainty that your software provider is at least mildly interested in fixing your problem.

When “Cloud Computing” hits speed bumps the whole Internet holds its breath, the latest example being the Amazon incident on April 21, 2011. If a SaaS service is not up and running, the SaaS firm is losing a lot of money, and very fast. Most SaaS firms have a monthly recurring revenue model, and customers can cancel their subscription with one month’s notice. This means that customers can vote with their wallets, so you can be sure that a SaaS firm gives its fullest attention to getting its service up and running as soon as possible. If your provider has fully clouded its technology (so-called multitenancy), your application instance will be fixed as soon as the service is fixed. No one is getting special treatment, good or bad. So there is no need to fear that your account is down while others are running.

Storm, calm or no cloud? You have no idea.

You should be able to see if your Cloud is in a storm or calm.

With a proper SLA the damages and indemnification are somehow tied to the amount of money you pay for the software, services included. With SaaS firms your monthly, and even yearly, fees are a quite small amount of money, and so are the potential liquidated damages. This means that, for example, a 10% indemnification of the subscription value is merely a nominal sum. For example, a SaaS service with a $59 yearly subscription would entitle you to about 50 cents of compensation for one month of downtime. And before you try to negotiate a higher indemnification, talk to your own lawyer and ask if you should sign a contract that has a liquidated damages clause over 100% of the contract value. Next, imagine asking for that from a SaaS firm, and guess the response. My point is that there is no realistic way to imagine an SLA between you and a SaaS provider that would have real monetary indemnification. The real penalty for a SaaS provider comes in the form of lost income, increased churn, and negative publicity. Any serious SaaS provider will do everything to avoid this.
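The 50-cent figure above is simple arithmetic, spelled out here:

```python
yearly_fee = 59.0                       # $59 yearly subscription
monthly_fee = yearly_fee / 12           # about $4.92 per month
indemnification = 0.10 * monthly_fee    # 10% of one month's fee
print(round(indemnification, 2))        # about $0.49, i.e. roughly 50 cents
```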

All is well in the cloud

Seek transparency instead of indemnification.

What I am trying to say is that instead of asking what SLA levels you receive and what kind of compensation is possible, ask what your SaaS provider is doing to minimize downtime. It does not make sense to try to get the SLA agreement as tight as possible. It makes more sense to make sure that the provider is “all in” with the cloud. If the service you are using truly has significant monetary value for your provider, it will make sure that it runs as smoothly as humanly possible.

Make them prove that they know what they are doing, but not with an SLA. Ask about security, availability, recovery, and how you can monitor uptime. For example Azure (with its Service Dashboard), AzureWatch and Sopima Oy provide RSS feeds to their customers to give transparency into their service levels.

Focus on finding the signs of preemption, transparency and security – instead of indemnification in the SLA.

PS. If you want to know the 10 questions you should ask your SaaS vendor, and what the correct answers are, go here.

Why Cloud Services are bought behind IT’s back?

A cloud worker in his thoughts, yes, that's me thinking about the shadow IT.

I originally wrote this post for my company blog.

A common situation today is that business units are purchasing SaaS services secretly, without the knowledge and/or acceptance of the IT department. The reasons are many; at the very least, they:

  1. do not believe that IT will give them permission to order the service,
  2. are afraid that IT will offer some custom-tweaked Intranet setup instead,
  3. believe they will eventually get a “similar” but inferior service, once IT finds the time to build it.

How did we reach this point?

I would say that a lack of understanding and the illusion of total control are to blame. Similar findings can be found in a recent publication of the University of Turku, the SaaS Handbook, in consulting company Sulava’s Marco Mäkinen’s video about shadow IT, and in an article in Tietoviikko (the leading Finnish IT professional paper). I’m sorry that the previous references are all in Finnish, so you just have to take my word for it or use Google Translate 🙂

I’ve been thinking about this issue for a while, and in my opinion the biggest problems are the following:

Firstly, IT says “No” because it does not understand the business problem at hand or the benefits that the chosen SaaS solution would offer. As an IT professional myself, I too used to start by listing problems and risks whenever a new service was presented to me. I did not believe that the solution would solve the issue because I simply did not understand how much it would actually help the business units. OK, I did some soul-searching and found a reason for My Inner Problem Troll: I have had such bad experiences with implementing new software services. The sales cycle is long, the price is high, server procurement and installation are laborious, customization and integration projects are mandatory, end-user training is a poor experience, and usability is bad. And very often after the software implementation project nobody wants to actually use the product, and everybody is feeling sick and tired of it. These experiences lead further to problem number two.

In engineer style, we often tend to overdo things, aim for perfection, and try to make everything ready at once. Very typical, at least in Finland, Nokialand. The new service must be fully integrated with other systems, the information must not sit in silos in different systems, the service must perfectly adapt to our processes (however outdated they are…), and on top of that it must adapt to changing business needs, and finally, the rollout must happen for all personnel at once. Phew. Everything has to be one complete solution.

When the chosen service is finally in production, the business need may have changed and therefore nobody wants to use the service. Summarized: all the integrations, customizations and implementation took too much time and way too much money.

The price of a traditional software product easily leads to problem number three.

Does this sound familiar? Because the purchased product was expensive and the implementation was painful, you desperately want to use the licenses and servers for almost any problem you encounter, even if there are better (and less expensive) solutions available. In other words, it is easy to end up trying to save money by using one single product to solve various needs.

I dare to argue that by doing it this way you will end up saving money from the wrong end. As an example: how many of you organize events with email and Excel sheets? Did you know that there are alternatives? If you count the hours you spend on Excel work and emails and compare this to services like Lyyti (a Finnish service), you don’t need to be a Nobel-awarded mathematician to understand a) the cost savings and b) the increased value to the event participants.

How did we end up here?

The need to control everything, futile (and expensive) perfectionism, fanatical avoidance of data silos, and emphasis on the potential risks instead of the benefits have led to a situation where SaaS services are purchased without the acceptance of IT management.

So what can we do about it? A lot! The solution is not to add more technology or interfaces.

Software companies have already accepted that agile development and early trials are the way to get a product that is good enough for the actual business needs and the schedule. So why would an IT department want to act in a way that is suspiciously similar to the waterfall method? I would highly recommend that added agility and a demo mentality also be tried in IT departments; SaaS services are very easy to test and experiment with!

If a SaaS product does not meet your needs, it is really easy to skip it and test another one. Testing new services costs way less than you think, and it gives you the possibility to react to actual business problems, not only to the problems that you can afford, have time to solve, or know how to fix!

Researcher Antero Järvi from the University of Turku said this to me when I asked how to integrate SaaS services with the rest of your applications: “Don’t integrate anything before you have used the service for a while and evaluated whether you even want to continue using it.”
