Random Ramblings About Making Games and Stuff from Cloud

We producers tend to trust processes and forget that trust, communication, skill and determination matter enormously. We tend to rely on Scrum, Kanban or similar over a person's skill and expertise. I know I did. I truly believe that we try to learn from past projects and even from postmortems written by kind folk on Gamasutra, but we usually miss the true reasons why we fail in game dev projects.

Making software is hard. Making games is damn near impossible. We keep making mistakes and we try to find reasons, and that is what we are supposed to do. But the mistakes we usually make are not the kind that following a process will solve. The problem might be that we really don't know how to make a fun game. We know how to make games, and we want to believe that we know what a fun game is, but ultimately it is the players who will tell us how we did. When we fail we tend to take a good look at what went wrong and make adjustments to the processes. While reflecting on the past is a good idea, we tend to dismiss the people behind the process. Usually a mistake is not made because we lacked full control over what happened. More often the reason is that direction, goals or communication were not clear. In some cases we just did not have the skills or knowledge. I don't think these situations can be solved by adding more and more control through process fixes.

In my opinion processes should not be treated like silver bullets that work every time across different companies, projects and team members. No process is a substitute for skill, motivation, communication and ownership. If the team and its members don't want to, don't know how to, or are not allowed to make a fun game, I believe it can't be done with any process. Using strict processes might tell you that the project is not going according to plan, but they will not launch a game. I challenge you to find what works best with your team and with each member of the team, and to heavily modify the ways you currently work if needed. It will not be easy, but as I found, it will pay off in the end. The process should give everyone in the team clear goals, improve communication and build trust. In most of the companies I have worked for, we used processes to gain control over the team and the game's design direction and to get visibility into how well the team was doing, and the people actually doing the work usually did not find them that useful.

I was recently fortunate enough to work with a dream team, and it opened my eyes. We managed to launch a top quality game in nine months with next to no pre-production. During that time I had one of the easiest projects of my producer career. It was odd, since we had a hard launch with a fixed deadline and a brand that demanded excellence in quality and fun factor. The reason it was easy was that I could truly trust that the team was doing their best. Even when the game's design direction was fluctuating quite a bit, I managed to remain calm because I knew the team believed it was the best thing to do. I could fully focus on producing: foreseeing possible problems, getting help to the team when needed, reminding the team about goals and schedule, communicating what we were doing and why to the rest of the company, and keeping the troops fed, happy and focused on the big picture.

After our nine-month game project I believe I have a hunch about how game projects should be directed and how a highly productive team should be built. It is essential that you trust the team and that the team is trusted by the company. We should let the professionals do their work and make the decisions, even if you don't fully agree with everything. That is why you (or someone else) hired them in the first place. When the team works well together and has the skills, motivation and a clear direction for the game, you don't need heavy processes, just communication and shared goals. Agree on the goals with the team, decide who does what, and go for it. Follow up on the goals at a high level by looking at the daily build. Changing the goals should be as easy as identifying that the current ones are no longer relevant and agreeing on new ones.

How we worked as a team

A word of warning before you read on. Our team had been working together for a while and we knew how we worked together. The tools we used and the game genre were familiar to us, and the team members were highly motivated, skillful professionals. In short, we knew what we were doing. So don't apply the following ways of working without considering whether they can work for your team.

An extra warning for the not-so-indies out there. We had real creative freedom, peace to operate, and we were allowed to make decisions inside the team. The following things will not work if you don't have the power and freedom to make the decisions that your team believes are best for the game.

What worked in our team was short one-week sprints with goals like "character abilities for playable demo" and "players are able to purchase gold from the shop". At the start of the sprint we had a meeting with the leads where we agreed on the goals. Next, the developers had their own meeting where they agreed on who does what and how to get there. One developer was reserved for the designer, to code gameplay features as soon as possible without heavy game design documentation. We had two daily meetings where we showed the team what we had done and what we were planning to do next. The only record of things to do was a whiteboard with a huge calendar to remind us how much time we had left. The whiteboard was redesigned a couple of times during the project to better match the project's current state. The only thing that remained stable were printed A4 papers labeled Important at the top of the whiteboard and Not important at the bottom. Targets, goals and tasks went to the top, and not-important ones to the lower part. Usually the not-important ones were removed from the wall without ever being implemented at all.

And that was the whole process. It looks and smells a lot like Scrum, but with two main differences. First, the process was designed to give as much information as possible to the team, keeping the focus on the end results, not the tasks. Second, we did not measure or monitor how long tasks took to complete. We only looked at what was in the game build each day, at times even per dev build, hourly.

When it comes to working with the people in the team, it varied. One developer preferred to work with a prioritized list, so we made one for his tasks. One developer preferred to take one huge goal like "payment system" or "event system" and work on it until it was completed. One developer liked to prototype, so we paired him with the designer and they talked together about what to try out next. One artist liked to work on the scenes, another wanted a list of game assets to work through, and so on. What I am saying is that the only uniform thing about how we tracked progress was that twice a day we showed what we had done, we talked a lot to each other, and we trusted each other's opinions and decisions.

Final words

So anyway, this is what I learned over the last nine months about what is important in game development and how a highly productive team can be built. Hopefully you find something useful in my ramblings. Start trusting people and see where that takes you.

Thanks for reading.

Also published on Gamasutra as a featured blog post.

Tools for decision making

One big problem in game development projects is that we are in constant doubt about the game we are working on. Do we need another game mode? Should we change the graphics because they seem too dark? Is this fun? Should we add this new feature? Should we even be making this game in the first place? Is our game good enough when our competitors have much better graphics? We are constantly making decisions based on hunch, fear and panic. To make things worse, we constantly end up revisiting the same problems and decisions over and over again.

I personally believe in the mantra "If your only tool is a hammer, then every problem looks like a nail". When you want to stick two pieces of lumber together there are nails, screws and glue, just to mention a few that pop into my mind. The same applies when you are making a decision: there are tools that will help you make decisions and have the confidence to stand behind them. I don't have magical knowledge about when to apply a certain tool to a certain problem so that it always results in the best available decision. When it comes to making decisions in the various stages of game development, the reality is that every game, and everyone making the decisions, is different.

So what does this blog post mean for you and me? Hopefully you will find some new tools, or at least new perspectives, on how decision-making can be less paralyzing and how to sanity-check your decisions. For me the best thing that might come from this blog post is your comments. I want to hear what tools you have in your arsenal and what new perspectives I could learn from you.

The first tool I use in these situations is to ask someone who is not working on the game. My animator friend described one of the problems of animating a movie as follows: "When you are working on an animated movie, it takes time. Animating a movie takes so much time that you must have a really good reason to make it an animation in the first place. It would be so much faster to just shoot it as video using live actors, or draw a comic book. There will be a time when you start to hate your story and want to make changes. This is just because you have worked on the project for so long. When you find yourself wanting to change things mid-project, the first thing you should do is ask someone else's opinion." In a nutshell this means that you will be fed up with your game project before you have finished it. Sometimes these changes can enhance the game, but sometimes you will want to change things just to amuse yourself. So before you start making changes, ask for an external opinion. Help can be found from fans, a colleague or a family member, just to name a few. As long as those people have not been involved with your project as heavily as you, they might have a better understanding of what is new and fun in your game. Of course you should not take their word as absolute truth, because in game development there is a huge amount of uncertainty until the game is polished and in the hands of the players.

The second tool is to always remember why you are making the game and to steer the project towards that goal. One of the most useful pieces of knowledge was passed to me just a few weeks ago: "You should always be aware why you are doing something. After that, have an idea how to measure your success when your game is live (or in alpha testing). If everybody in the team understands the reasons why this game is being made, they will more likely make the right decisions." So you should always be aware why you are making a game or an update, and everybody in the project team should know those reasons. Reasons may vary from personal ambitions ("I want to make this kind of a game") to more business-oriented ones ("we need to increase our daily active user count"). Regardless of the origin or the reason itself, it needs to be stated aloud and measured accordingly. If you do that, it is easier to make decisions. It is always possible to revisit the reasons why you are making something, and when you do change the reason, don't be afraid to redesign the whole game or cancel the project.

The third tool in my arsenal is to measure and analyze your success and learn from your mistakes. The following piece of helpful encouragement was handed to me by my boss once: "You are going to make mistakes. A lot of them. And if you don't follow the impact against your desired targets, there is a high chance that you will never learn from your decisions (and how to make them)." Be prepared to measure your success in some measurable way and learn from your mistakes. If increasing daily active users is your aim, measure the effect post-launch, analyze your results and learn from them. If you were making a totally new kind of game, check from reviews and comments whether the press and gamers really noticed that this is something totally new. You should also try to learn how you and your team make decisions. Keep a decision log where you list decisions, dates, the people involved and the reasons why each decision was made. Use that list when you are facing a situation where you might want to change direction or revert a decision you already made. Also make the decision log review a part of your postmortem process and try to learn from it. If you are lucky you might even learn from the mistakes of others. Read lots of postmortems and ask how other people make decisions in your workplace or startup sparring circle.

The fourth tool is to fail fast and not be afraid of making mistakes. My boss and numerous excellent blog posts on Gamasutra (1, 2) already recognize this. You are going to make mistakes, and a lot of them. Don't be afraid to try something out. You should be equally willing to admit to yourself when the thing you tried does not work. Don't force something into a game just because you made a decision about it. Do, review, iterate and abandon ideas in fast cycles.

Well, that was my two cents on how decision-making could be easier and how you and your team could make decisions more efficiently. I would love to hear your experiences, comments and ideas. I am always looking for opportunities to learn from the best.

Also posted on Gamasutra as a featured blog post.

I wanted to build a Windows Phone 7 app called "Stuff I Like". The app would have an OData service running in Azure and a client that caches and syncs against that service. In a previous post I wrote about what I learned building the WP7 app. In this part I will reveal my findings on the service side of things. In the next post I will delve into the authentication side of the app.

If you are planning to add federated authentication using the Azure Access Control Service, you might want to start by building authentication first. For me it was waaaay easier to build a normal ASP.NET web page, add authentication to that, and add the WCF Data Service after login worked. I recommend downloading the Azure Training Kit: http://www.microsoft.com/en-us/download/details.aspx?id=8396 and completing this lab exercise: http://msdn.microsoft.com/en-us/identitytrainingcourse_acsandwindowsphone7.aspx

Now back to business.

I followed this "Data service in the cloud" walkthrough: http://msdn.microsoft.com/en-us/data/gg192994. I decided to design the data model using the Visual Studio designer. After playing around with the tool I managed to "draw" the data model, and after that it was just a matter of syncing it to the database.

This is how I drew the data model

After syncing the above data model into the database, setting up the service and running it, I quickly found that my WCF Data Service was not working at all. The message "The server encountered an error processing the request. See server logs for more details." was shown to me quite frequently. Well, "the bug" was quite simple to fix, and these debugging instructions helped me a lot http://www.bondigeek.com/blog/2010/12/11/debugging-wcf-data-services:

  1. A missing SetEntitySetAccessRule
  2. A missing pluralisation on the SetEntitySetAccessRule
  3. A missing SetServiceOperationAccessRule

After debugging the EntitySetAccessRules with a simple "*" AllowRead, just to check that I had not made a typo, I quickly found out that I indeed had 😦 A typo in an EdmRelationshipAttribute was causing the exception. After I fixed that stupid mistake, things started to look better.
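For reference, here is a minimal sketch of what the debugging setup boils down to. The context type `StuffILikeEntities` is a stand-in for whatever your model generates; the wildcard rule is handy while debugging, and you can tighten it to exact (correctly pluralised!) set names once things work:

```csharp
using System.Data.Services;
using System.Data.Services.Common;

public class WcfDataServiceStuffILike : DataService<StuffILikeEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        // While debugging, a wildcard rule rules out typos in entity set names.
        config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
        // Once it works, tighten to explicit set names, e.g.:
        // config.SetEntitySetAccessRule("StuffSet", EntitySetRights.AllRead);

        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;

        // Surface the real exception instead of
        // "The server encountered an error processing the request."
        config.UseVerboseErrors = true;
    }
}
```

Remember to turn `UseVerboseErrors` off again before going to production.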

If you need more instructions on how to turn on debugging messages then just follow these instructions:

This is how my service looked after I finally got it running

After I managed to get the service defined and running, I took a second to make the OData feed a bit more readable, so that it can be consumed using Internet Explorer and other browsers: http://msdn.microsoft.com/en-us/library/ee373839.aspx

This is how OData feed looks before you tweak it a bit.

For some reason I first managed to add "m:FC_TargetPath" and similar properties to the wrong XML element. So make sure you scroll down the file and add them to the correct place 🙂
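For what it's worth, the friendly-feed attributes belong on the `Property` elements of the conceptual model (the CSDL section, lower down in the .edmx file), not on the storage model that appears first. A sketch with a hypothetical `Name` property, not copied from my actual model:

```xml
<!-- Conceptual model (CSDL) section of the .edmx -->
<EntityType Name="Stuff">
  <Key><PropertyRef Name="Id" /></Key>
  <Property Name="Id" Type="Guid" Nullable="false" />
  <Property Name="Name" Type="String"
            m:FC_TargetPath="SyndicationTitle"
            m:FC_ContentKind="text"
            m:FC_KeepInContent="true" />
</EntityType>
```

With a mapping like this, the entity's `Name` shows up as the entry title when the feed is rendered in a browser.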

This is how OData feed will look when you tweak it a bit

Another thing that took me a couple of hours to figure out was that Internet Explorer does not show all OData results in a consistent way. So before you start heavy debugging, check the source code of the returned HTML page; you should see the expected result in XML format. Or you could use another browser. For example this call did not seem to return anything http://localhost:57510/WcfDataServiceStuffILike.svc/StuffSet(guid'cf1bfd2f-99f3-4047-99f8-22bc1aad1b99')/GategorySet until I checked the source code of the page.

So that's it. Using Visual Studio this was quite easy, and I actually spent more time figuring out why some configuration did not work than writing code. This might be because I have an "unclean" dev environment, or because I made lots of changes to the above demos while following them. That was mainly because I wanted to build my own app and not simply type in demos and labs.

I bet if you build your dev environment correctly and follow the labs and demos to the letter, you won't see as many problems as I witnessed. But where is the fun in that 😉

I recently started playing around with the Windows Phone 7 devkit and found it to be a surprisingly productive tool set. I managed to develop a simple app within a couple of weekends with almost zero previous knowledge of WP7 programming. I learned a couple of things on the way and wanted to share them with you.

I started by installing Expression Blend, Visual Studio 2010 and the WP7 SDK. I made the UI using Blend and did the coding in Visual Studio. The benefits of this were a near-WYSIWYG experience with the WP7 UI and no broken XAML during development.

And now the tips I learned:

  • Start with the data. Design the data model first, because it will help you work with Expression Blend. Make design-time data for your Blend project and learn how to troubleshoot it.
  • Trust Blend. Use it and learn it. I admit that it took a good amount of time to figure out how to do everything. Some weirdness that I found hard to locate:
    • Databinding to UI items.
    • For tables you need to add row and column definitions (found under Layout).
    • To add more complex list box items than plain text, you need to edit the list box item template. Found under the list box right mouse click: Edit Generated Items, Edit Current.
    • This is how to create context menu items.
    • This is how to create the application bar.
    • Use system styles (PhoneForeground etc.) in text boxes, so if the user changes the phone theme your app will follow it.
    • DO NOT apply system styles to list boxes! It will break the selected-item styles. Adjust the font size and font, or make your own styles that only change font and font size. Otherwise you will spend time reimplementing the list box selected-item visualization.
  • BIND the UI to data. Seriously, this will help a lot. You should not assign data to UI components directly. All my weird UI bugs where data was not updating were the result of not doing the binding correctly.
  • Test that binding works on your data periodically.
  • Make your data classes implement INotifyPropertyChanged, so that when the data changes it automatically changes in the UI.
  • Define your data as serializable.
  • Icons: default icons can be found in "C:\Program Files (x86)\Microsoft SDKs\Windows Phone\v7.1\Icons".
  • To make clear enough icons you can also use Inkscape with a 1028×1028 canvas and convert the image to PNG.
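To make the INotifyPropertyChanged tip concrete, here is a minimal sketch. The class and property names are hypothetical, not from the actual app:

```csharp
using System.ComponentModel;

// Hypothetical model class; when Name changes, any bound UI elements refresh.
public class StuffItem : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    private string name;
    public string Name
    {
        get { return name; }
        set
        {
            if (name == value) return;
            name = value;
            // Tell any bound controls that this property changed.
            if (PropertyChanged != null)
                PropertyChanged(this, new PropertyChangedEventArgs("Name"));
        }
    }
}
```

Bind a TextBlock's Text to `Name` and the UI updates by itself whenever the setter runs; no manual assignment to the control is needed.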

Well, that's it for now. I will make follow-up posts on things I learn during my ventures in the WP7 world.

PS. Here are some screenshots of the app I am working on. It's called "Stuff I Like" and it's a simple tool for keeping track of stuff I hear about and might like.

I stumbled upon Red Gate Azure Backup service: https://cloudservices.red-gate.com/

After using the beta for a couple of weeks I am really impressed. Everything is easy, the UI is intuitive and you don't need lots of configuration to make this work. You can set up a basic daily backup job in a matter of minutes if you have an Azure Storage or Amazon S3 account ready. If not, it still does not take long. You spend most of the time digging up usernames and passwords, not figuring out how things work! Splendid.

So let's go through how to do a daily Azure SQL backup into Amazon S3 storage.

Red-Gate Cloud Service Dashboard is really clean and simple. Just select the action you want to perform.

First, after registering for Red-Gate Cloud Services and logging in, select "Backup SQL Azure to Amazon S3" from the Dashboard.

Just fill in the Azure SQL credentials and Amazon S3 keys.

Secondly, fill in the Azure SQL server name xxxxxxx.database.windows.net and the login credentials to that DB, and press the refresh button next to the Database dropdown. Select the database that you want to back up. Next click on the "AWS Security Credentials" link and log in to your Amazon AWS account. You will be taken directly to the place where you can find the keys.

"Access key id" goes to "AWS Access Key" and "Secret Access Key" is shown after pressing "show" link in Amazon AWS.

Note that you need an S3 bucket. If you don't have one you can create it here: https://console.aws.amazon.com/s3/home I will not explain how that is done, but it is really easy. After you have inserted the Amazon keys you can press the refresh button next to the Bucket dropdown in the Red-Gate UI. Then fill in the name that you want your backup file to have. Don't worry about the day stamp, because the tool will add the date at the end of the filename. Press the "continue" button to select scheduling options, or back up now.

Just select when you want your daily backup to run or just press "Backup now" button.

Next, fill in the exact time you want your backup in 24h format, select the time zone, select the weekdays on which to run it, and press the schedule button. You can also just press the backup now button for a one-time backup. Note that there is also a monthly backup option.

You should see your scheduled backup in the next screen. When your first backup is done it appears here under the History header.

Next you are taken into the schedules and history view. Here you can view the details of the upcoming backup operation, cancel it, or invoke a backup right now. Just move your mouse over the upcoming event.

Just move the mouse on top of a scheduled backup job and you can cancel the job or invoke it to run right now.

When a backup job has completed successfully, you will receive an email and a "log" line will appear in "schedules and backup history".

Whether the job was successful or not, it will appear here under the History title

You can view the details of executed jobs by moving the mouse on top of a history line. If your backup job has run into errors you will see an error triangle and receive the error message via email. The email also contains a direct link to that specific history log line. Neat!

If your backup job had exceptions, it has a warning icon in the history. The details link will contain an error message describing what went wrong.

Clicking the link in the email takes you to the same view as clicking "details" next to a completed job.

Here you can read through what happened while the backup was executing. If you ran into errors they are also present. Just scroll down from the bar.

If your backup ran cleanly, you should see the created bacpac file in the Amazon S3 bucket.

Backup file is safe and sound in Amazon S3

Note that you can just as easily use Azure Storage services, or use FTP to upload the backup into your own backup architecture.

There are a couple of missing features that I would like to see. For example, an option like the one in the Red-Gate backup tool to first create a copy of the database (for transactional safety), and the ability to encrypt the backup file before it is uploaded to Amazon or Azure storage. But even without these small features this is still awesome!

Carefree clouding!

I just started to use this SQL Azure and SQL Server monitoring tool http://www.cotega.com/ and I must say that it is exactly what I was looking for in an easy-to-use monitoring tool. I don't need a complex, feature-rich, massive monitoring tool. I only need an alert when the database is not accessible, when performance seems to be below average, or when there is a spike in usage. The service is still in beta but it looks really promising.

Just click "add SQL Azure database"

Setting up the database connection was easy. After login you are presented with the Dashboard view. Just click "add SQL Azure database" and fill in the database address, username and password in the popup. Finally press the "Add database" button.

Fill in the address and login details.

You can add notifications to your databases by going to the Notifications view and pressing the "Create Notification" button.

To add new notification just press "Create Notification"

Just fill in the name of the notification, select a monitoring rule from "select what you want to monitor", the database, and the polling frequency.

Fill in the details and select monitoring target.

After you have selected the monitoring target, you can select when to create a notification. In this case I have selected "when connection fails". After this you can fill in an email address and even select a stored procedure to be run, though in the connection-fails case that does not make much sense :). Finally press "Add Notification".

Add an email address and select a stored procedure if you need one

There you go. Just start waiting for those notifications to kick in.
There is a nice feature in the notifications: when you start to receive lots of them, for example concerning "connection fails", you can temporarily disable the notification directly from the notification.

When you get many notifications of the same type, you have the option to disable the notification directly from the email.

You can check the logs for a specific notification to make sure that the rule works.

You can see how rules are being evaluated.

You can also view this info in a report view for performance analysis purposes.

Its easier to spot performance problems from visualized reports.

A very promising, neat little monitoring tool for those who don't need a complex solution to a simple problem!

What Azure SQL was missing was a properly supported way to make backups for disaster scenarios. Those scenarios include losing control over the Azure SQL server, or a human error causing a SQL admin to delete the whole Azure SQL server. Great news, everyone: Azure SQL now has tools to mitigate the impact of these scenarios. The SQL Azure Import/Export Service CTP is now available. More details on how this works can be found here.

What do you need?

You need a means of scheduling the backup, an Azure Storage account to put the bacpac files in, a means to get the exact URL of a bacpac file, and an Azure SQL account to back up.

Azure Account

You need one Azure account with Azure SQL and a Storage account, of course. 🙂

You might want to have a separate backup storage account, because if you cannot access your production account, you still have access to your backups. I personally download the bacpac backups from Azure to a local server.

Bacpac file export tool

In this example I will use the Red-Gate backup tool, because it allows you to easily make a copy of your database before the backup. This lets you make transactionally safe backups.

But you can also use the DAC SQL Azure Import Export Service Client V 1.2. With that tool you need to make the database copy yourself, for example using Cerebrata cmdlets.

Azure Storage account browser

You can use any tool you like. In this example I will use Azure Storage Explorer because it is free 🙂

Windows server

Ideally this would be a dedicated Windows 2008 server, so that you can be sure it runs smoothly. You can also download the exported bacpac files to this server. Just to be safe 🙂

Backup using Red-Gate command line backup tool

Here is a nice how-to video on setting up scripts using the Red-Gate tool. NOTE that in this video the script makes a backup into a different database. What we want to do is schedule bacpac file generation. If you want to test how bacpac file generation works without the command line, watch this video.

But back to business: your scheduled script should look something like this:

RedGate.SQLAzureBackupCommandLine.exe /AzureServer:[url_to_azure_server] /AzureDatabase:[databasename] /AzureUserName:[db_owner_username] /AzurePassword:[password] /CreateCopy /StorageAccount:[Azure_account_name] /AccessKey:[primary_or_secondary_azure_storage_key] /Container:[container_name_in_storage] /Filename:[filename_of_bacpac]

Notice that I did not use any real values in the above script. Just fill in the parameters between [ ] and schedule the script to run as often as you like. Note that you need to change the file name on every run, because overwriting with the same file name does not work. I use date+time combinations.
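If you generate the script from code, the date+time naming I mean boils down to something like this (a sketch; the "stuffilike" prefix is a hypothetical example, not my real file name):

```csharp
using System;

class BackupFileName
{
    static void Main()
    {
        // A date+time stamp keeps every run's bacpac file name unique,
        // since overwriting an existing file name does not work.
        string fileName = "stuffilike-"
            + DateTime.UtcNow.ToString("yyyyMMdd-HHmm")
            + ".bacpac";
        Console.WriteLine(fileName);
    }
}
```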

So now we are ready for disasters 😉 Next I will explain how to perform a restore to a totally new Azure SQL server.

Restore using Windows Azure management portal

When you need to restore a database to a new server, you need to have access to the bacpac backup file in an Azure storage account.

  • Firstly, create a new Azure SQL server. A how-to video for that is here.
  • Secondly, and this is important, add the same database logins that the backed-up database had. Azure SQL user management is explained in detail here.
  • Thirdly, get the URL of the bacpac file using Azure Storage Explorer. Open Azure Storage Explorer and log in to the storage account containing the bacpac file. Select the bacpac file and press the View button. This is also shown in the video in the next step.
  • Fourthly, you are ready to restore! It can be done as explained in this video.

If you have any comments or questions please don’t hesitate to ask.

Thanks for reading and happy backupping!

Still free, or did you lock yourself in already?

I wrote this post originally to my company blog.

In the press and in many online forums there's a lively discussion about so-called vendor lock-in: customers facing a situation where it is very difficult to replace the existing product with a new one, or where the replacement costs are prohibitively high.

Indeed, vendor lock-in deserves your attention; it is good to stop and think before making software purchasing decisions. Let me list some common questions that may come to mind:

  • What if it is very difficult or impossible to get my own data out of the system?
  • What should I do in case where my software provider drastically raises the license fees?
  • What if my present provider stops supporting the product I’ve bought when a new version of it is coming to the market?
  • What if I’ll find that another provider offers much nicer and suitable solution for me, and I wish to export all my data into a new one instead?
  • What if I am forced to take the new version of the present solution into use, will it support our good old way of doing things – or must the entire organization AGAIN learn to use a new system?
  • Must I once again survive a long and laborious implementation project?
  • Do I make myself a ‘hostage’ by taking this software in use?
  • What if the terms of use of a software component become indefensible and must we migrate from one software to another?

All these above-mentioned points lead to a somewhat unpleasant situation, and no one seems to have a solution to it. Googling (or binging) the notion 'vendor lock-in' does not bring up any solutions; only fears, threats, and horror stories. Vendor lock-in questions and fears have recently risen in discussions thanks to the strong presence of various SaaS solutions and cloud computing technologies.

However, the fact is that vendor lock-in can’t be entirely avoided; it always occurs to some extent. I mean always, both with cloud/SaaS and with in-house software. But there is a difference between these deployment methods.

With on-premise and in-house software, the data lives on your own servers and databases. In a way, that can feel safe, but the truth is that this data is very often unusable without an (internal) consulting project in which IT professionals transfer, convert, and prepare data for the next system. As you might guess, after that kind of effort it is common for a rather heavy implementation project to begin: installations, testing teams, training sessions, and an eventual renewal of business processes.

All this results in a situation where the end result can be admired only after an eight-month project…and it shows in the company’s results only after a year! Way too slow. And oops, did the business environment change during that time?

I also dare to claim that on-premise software is most often chosen without full knowledge of its data portability. When it is checked at all, it is often done by asking the vendor’s salesperson, “And the data can be easily imported?” Of course the answer is ‘yes’, accompanied by plenty of reassuring head nodding, and no one really verifies it in practice until it is too late to react.

SaaS services and most cloud providers do not require any technical installation, so the implementation project is much slimmer. The ‘worst’ of the workload might be the training, but the best SaaS solutions are so simple to use that a massive training program is often not needed.

I’d say it is quite rude to frighten customers by speaking ill of vendor lock-in with SaaS and cloud computing providers if what you offer instead is on-premise software. The risk is that you become locked in far more deeply, as parts of your IT infrastructure, your process development, the versions of your desktop clients, and even your operating systems become dependent on the on-premise software. The license prices of ‘normal’ software can easily bounce up too, with any vendor. Just ask an ERP client 🙂

The vendor lock-in of SaaS vendors is, at its worst, only as deep as with ‘normal’ software. Most often, I claim, it is shallower and less risky.

If you want to minimize your vendor lock-in, the question is not whether you take an in-house, on-premise, or SaaS product, but which SaaS product you choose.

Before making a headlong SaaS decision, you should at least browse through the articles mentioned below. They might help you better understand your options.

When you’re in the cloud, you’re ordering a cup of coffee, not the coffee maker.

In English:

http://info.isutility.com/bid/63587/Top-Cloud-Computing-Implementation-Concerns-and-how-to-address-them

In Finnish:

http://blog.sopima.com/2011/04/11/sekaisin-pilvipalveluista/

http://soft.utu.fi/saas/

What do you really want to accomplish with a Service Level Agreement (SLA)? To punish, or to get the best support available as soon as possible? With traditional on-premise software, if there is a problem, you are pretty much alone with it. The time and money between the binary hitting the fan and the fan being fixed come solely out of your pocket. In a traditional on-premise or dedicated-server environment, an SLA makes sense: you need some leverage and certainty that your software provider is at least mildly interested in fixing your problem.

When “Cloud Computing” hits speed bumps, the whole Internet holds its breath; the latest example is the Amazon incident of April 21, 2011. If a SaaS service is not up and running, the SaaS firm loses a lot of money, and fast. Most SaaS firms have a monthly recurring revenue model, and customers can cancel their subscription with one month’s notice. This means customers can vote with their wallets, so you can be sure a SaaS firm gives its fullest attention to getting its service up and running as soon as possible. If your provider has fully clouded its technology (so-called multitenancy), your application instance will be fixed as soon as the service is fixed. No one gets special treatment, good or bad, so there is no need to fear that your account is down while others are running.

Storm, calm or no cloud? You have no idea.

You should be able to see whether your cloud is in a storm or in a calm.

With a proper SLA, the damages and indemnification are somehow tied to the amount of money you pay for the software, services included. With SaaS firms, your monthly and even yearly fees are quite small amounts of money, and so are the potential liquidated damages. This means that, for example, a 10% indemnification of the subscription value is merely a nominal sum: a SaaS service costing you $59 per year would entitle you to about 50 cents of compensation for one month of downtime. And before you try to negotiate a higher indemnification, talk to your own lawyer and ask whether you should sign a contract with a liquidated damages clause worth more than 100% of the contract value. Next, imagine asking a SaaS firm for exactly that, and guess the response. My point is that there is no realistic SLA between you and a SaaS provider that would carry real monetary indemnification. The real penalty for a SaaS provider comes as lost income, increased churn, and negative publicity. Any serious SaaS provider will do everything to avoid these.
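To make that arithmetic concrete, here is a minimal Python sketch. It assumes a clause that pays a fixed percentage of one month’s share of the subscription value per month of downtime; the function name and the 10% rate are illustrative, not taken from any real contract:

```python
def downtime_compensation(yearly_fee, indemnification_rate=0.10, months_down=1):
    """Rough compensation under an SLA that pays a percentage of the
    subscription value for the downtime period (illustrative only)."""
    monthly_fee = yearly_fee / 12          # value of one month of service
    return indemnification_rate * monthly_fee * months_down

# A $59/year subscription with a 10% clause and one month of downtime:
print(round(downtime_compensation(59.0), 2))  # prints 0.49
```

Even doubling or tripling the rate leaves the payout nowhere near what the outage actually costs your business, which is exactly why the monetary indemnification is nominal.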

All is well in the cloud

Seek transparency instead of indemnification.

What I am trying to say is that instead of asking what SLA levels you receive and what kind of compensation is possible, ask what your SaaS provider is doing to minimize downtime. It does not make sense to try to get the SLA as tight as possible; it makes more sense to make sure the provider is “all in” with the cloud. If the service you are using truly has significant monetary value for your provider, the provider will make sure it runs as smoothly as humanly possible.

Make them prove that they know what they are doing, but not with an SLA. Ask about security, availability, recovery, and how you can monitor uptime. For example, Azure (Service Dashboard), AzureWatch, and Sopima Oy provide RSS feeds to their customers to give transparency into their service levels.
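As a sketch of what consuming such a status feed could look like, here is a minimal Python example that pulls the newest item out of an RSS 2.0 document using only the standard library. The feed content below is invented for illustration; a real dashboard has its own URL and item format:

```python
import xml.etree.ElementTree as ET

# A hypothetical status feed, in the shape many RSS health dashboards use.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>Example Service Health</title>
  <item>
    <title>[Resolved] Elevated error rates in EU region</title>
    <pubDate>Thu, 21 Apr 2011 09:00:00 GMT</pubDate>
  </item>
  <item>
    <title>All systems operational</title>
    <pubDate>Wed, 20 Apr 2011 09:00:00 GMT</pubDate>
  </item>
</channel></rss>"""

def latest_status(feed_xml):
    """Return the title of the most recent item in an RSS 2.0 feed
    (RSS feeds conventionally list the newest item first)."""
    root = ET.fromstring(feed_xml)
    first_item = root.find("./channel/item")
    return first_item.findtext("title")

print(latest_status(SAMPLE_FEED))  # prints the newest status line
```

In practice you would fetch the provider’s real feed URL on a schedule and alert on new items; the point is simply that this transparency is machine-readable and cheap to consume.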

Focus on finding signs of preemptive action, transparency, and security, instead of indemnification in the SLA.

PS. If you want to know the 10 questions you should ask your SaaS vendor, and what the correct answers are, go here.

A cloud worker deep in his thoughts. Yes, that’s me, thinking about shadow IT.

I originally wrote this post for my company blog.

A common situation today is that business units purchase SaaS services secretly, without the knowledge and/or acceptance of the IT department. The reasons are many, but at least these: they

  1. do not believe that IT will give them permission to order the service,
  2. are afraid that IT will offer some custom-tweaked intranet setup instead,
  3. expect to eventually get a “similar” but inferior service, once IT finds the time to build it.

How did we reach this point?

I would say that a lack of understanding and the illusion of total control are to blame. Similar findings appear in a recent publication from the University of Turku, the SaaS Handbook, in consulting company Sulava’s Marco Mäkinen’s video about shadow IT, and in an article in Tietoviikko (the leading Finnish IT professional paper). I’m sorry that these references are all in Finnish, so you will just have to take my word for it or use Google Translate 🙂

I’ve been thinking about this issue for a while, and in my opinion the biggest problems are the following:

Firstly, IT says “No” because it does not understand the business problem at hand or the benefits that the chosen SaaS solution would offer. As an IT professional myself, I too used to start by listing problems and risks whenever a new service was presented to me. I did not believe that the solution would solve the issue, simply because I did not understand how much it would actually help the business units. OK, I did some soul-searching and found a reason for My Inner Problem Troll: I have had bad experiences with the implementation of new software services. The sales cycle is long, the price is high, server procurement and installation are laborious, customization and integration projects are mandatory, end-user training is unpleasant, and usability is poor. And very often, after the software implementation project, nobody actually wants to use the product, and everybody is sick and tired of it. These experiences lead straight to problem number two.

In true engineer style, we often tend to overdo things, aim for perfection, and try to make everything ready at once. This is very typical, at least in Finland, Nokialand. The new service must be fully integrated with the other systems, the information must not sit in silos across different systems, the service must adapt perfectly to our processes (however outdated they are…), and on top of that it must adapt to changing business needs; finally, the rollout must cover all personnel at once. Phew. Everything has to be one complete solution.

When the chosen service is finally in production, the business need may have changed, and therefore nobody wants to use the service. In summary: all the integrations, customizations, and implementation took too much time and way too much money.

The price of a traditional software product easily leads to problem number three.

Does this sound familiar? Because the purchased product was expensive and the implementation was painful, you desperately want to use the licenses and servers for almost any problem you encounter, even when better (and less expensive) solutions are available. In other words, it is easy to end up trying to save money by using one single product to solve all kinds of needs.

I dare to argue that by doing this you end up saving money from the wrong end. As an example: how many of you organize events with email and Excel sheets? Did you know there are alternatives? If you count the hours you spend on Excel work and emails and compare that to services like Lyyti (a Finnish service), you don’t need to be a Nobel-winning mathematician to understand a) the cost savings and b) the increased value to the event participants.

How did we end up here?

The need to control everything, futile (and expensive) perfectionism, fanatical avoidance of data silos, and emphasis on potential risks instead of benefits have led to a situation where SaaS services are purchased without the acceptance of IT management.

So what can we do about it? A lot! The solution is not to add more technology or interfaces.

Software companies have already accepted that agile development and early trials are the way to get a product that is good enough for the actual business needs and schedule. So why would an IT department want to act in a way that is suspiciously similar to the waterfall method? I would highly recommend that added agility and a demo mentality be tried out in IT departments as well; SaaS services are very easy to test and experiment with!

If a SaaS product does not meet your needs, it is really easy to skip it and test another one. Testing new services costs way less than you think, and it gives you the ability to react to actual business problems, not only to the problems you can afford, have time to solve, or know how to fix!

Researcher Antero Järvi from the University of Turku said this to me when I asked how to integrate SaaS services with the rest of the application stack: “Don’t integrate anything before you use the service for a while and evaluate if you even want to continue to use the service.”
