I’ve talked before about how working with .NET and Mono on Windows and Linux led to better code. Now I’ve had the same experience with C++ and Windows. I’ve been working on a Managed C++ component that interacts with OpenSSL (yuck!). The application performed very well on Vista, Win 2k8, Win 7, etc., but on Win XP it would sometimes die abruptly and unexpectedly. We were presented with a very ugly error in the Event Viewer, and the failure was so violent that our exception handling mechanisms were completely bypassed.

We thought it could be some problem with linking and libs on Win XP. It wasn’t our fault; it worked on other machines; maybe Win XP was just that unstable. But how often do we blame these problems on everything except ourselves?

The problem showed itself more directly when we ran the application on Win 2k3. While the app just died on Win XP, and nothing went wrong on Vista and later, Win 2k3 told us that the application was wrong, and why.

Apparently we were marshalling some managed strings and, due to some copy paste (the bastard!), we were freeing them not with the correct method (FreeHGlobal) but with FreeBSTR. Result: KABOOM! Why we didn’t get any error on Vista is beyond me. After fixing that, the Win XP version works like a charm.
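
The post doesn’t show the offending code, but the class of bug looks like this minimal C# sketch (the string content is illustrative): unmanaged memory must be released by the same allocator family that produced it.

C#:
using System;
using System.Runtime.InteropServices;

class MarshalMismatch
{
    static void Main()
    {
        // Allocate an unmanaged ANSI copy of a managed string, e.g. to
        // hand it to a native library such as OpenSSL.
        IntPtr native = Marshal.StringToHGlobalAnsi("some key material");

        // WRONG: this pointer came from the HGlobal allocator, not the COM
        // BSTR allocator. Depending on the Windows version this may appear
        // to work, corrupt the heap silently, or kill the process outright.
        //Marshal.FreeBSTR(native);

        // RIGHT: free with the matching deallocator.
        Marshal.FreeHGlobal(native);
    }
}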

Cross-platform .NET

November 2nd, 2009

I’ve had a lot of feedback on my running Mono article. The majority of it went something like: .NET isn’t cross-platform, Mono is evil, MS is evil. How can someone tell me that .NET isn’t cross-platform when I have an average-sized product running out of the box on MS .NET and Mono? I used to take part in a lot of discussions about this a couple of years ago, but then I realized it was more productive to actually do something instead of arguing about these kinds of issues.

What makes a Java application cross-platform? Will a Java application be cross-platform if it references resources like c:\MyApp\MyApp.ini? What if we have a file named MyApp.ini and fetch it with getResource("myapp.ini")? And what if we use operating-system-specific resources?

The same goes for .NET applications. If the developers are careful, it’s easy to have a cross-platform application. Having a cross-platform desktop application is hard, that’s true. Microsoft made Windows Forms very Windows-specific (using things like window handles), and it’s hard for Mono to produce a cross-platform Windows Forms implementation. But there are alternatives, like GTK#. Even so, I recognize that the desktop is the least cross-platform part of .NET.

But on a web/services scenario, Mono is as cross-platform as you can get.

Running Mono - an Overview

October 29th, 2009

We’ve been using Mono on the production server of Orion’s Belt for a couple of months now. In this article I’d like to share our experiences with it. We developed our project fully on Windows with Visual Studio 2005, and at some point we started considering Mono for the production server. Do note that the Orion’s Belt team has a Windows background and little experience administering Linux machines.

We serve all of the game’s web applications with Mono.

Step 1 - Preparing the Server

The game runs on an Ubuntu server. We downloaded the Mono source code and installed it manually. We could have used the packages, but they aren’t versatile enough: packages aren’t always up to date, and we find them hard to manage. For example, how could we have two versions of Mono installed and choose which one to run? How could we always have an up-to-date version? To solve this we chose to install from source, following the Parallel Mono Environments article as a guideline. This was great because we could switch to Mono from SVN, or to a specific version, just by changing some variables. We didn’t want to install a new version, hit problems, and then have a bad time rolling back.

We use FastCGI with Nginx to serve the game. Nginx is really cool, and very easy to configure and manage. We also installed MySQL. Apart from Mono, all the other necessary software was very easy to install and configure, with the help of Google of course. We got the server displaying ASPX pages easily.

Step 2 - Running the Application

The Orion’s Belt project is fairly big, and after some months of coding entirely on Windows, the move to Linux was peaceful. We had some file-casing issues, but that was it. We have a NAnt script that creates a deploy package, and we were able to upload it to the Mono server and run it. There were some problems at that time: Mono’s web server would occasionally throw compile errors while compiling ASPX pages. Fortunately, around then, Mono gained support for precompiled ASP.NET web sites. We added a NAnt step to precompile the deploy package; everything got faster and those errors went away.

Another problem was the Mono web process and the resources it used: it would grow to 600-900MB of RAM and waste a lot of CPU, even when idle. So we started killing the process from time to time. Sometimes the process would also die unexpectedly, so we started using supervise to keep Mono’s process up.

There were other issues along the way: touching Web.config to recycle the application isn’t as stable as it should be, and when we deployed new versions Mono would misbehave, shutting down or simply not responding. So we got used to just killing Mono on every deploy or whenever we needed a reset. It’s very easy: you kill Mono’s process, and supervise brings it back up.

Supporting multiple OSes makes your code better

It may sound weird, but it’s true. We had a lot of bugs that showed up only on Mono. For example, we use NHibernate, and everything worked fine on Windows, but on Linux it sometimes didn’t. We found out that we needed a flush here and there: Windows let us get away without it, but Linux wasn’t that permissive.
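
As a hedged illustration (the entity and field names are made up, not our real model), the pattern was roughly this: rely on an implicit flush and it happens to work on Windows, but on Mono the explicit Flush was needed.

C#:
using NHibernate;

public class Player
{
    public virtual int Id { get; set; }
    public virtual int Score { get; set; }
}

public class PlayerRepository
{
    private readonly ISessionFactory sessionFactory;

    public PlayerRepository(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public void AddScore(int playerId, int points)
    {
        using (ISession session = sessionFactory.OpenSession())
        {
            Player player = session.Get<Player>(playerId);
            player.Score += points;
            session.Update(player);

            // On Windows the change happened to reach the database anyway;
            // on Mono it sometimes didn't until we flushed explicitly.
            session.Flush();
        }
    }
}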

Using Mono brought an interesting mindset to the team. Every time there was a problem, we’d blame it on Mono. But most of the time, it was our code that wasn’t up to it.

The Linux issues also made us create specific guidelines for file-name casing and forced the use of Path.Combine and related methods (see the sketch below). We also tried MoMA, Mono’s migration analyzer, but we didn’t find it that useful in our situation.
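
A minimal sketch of what those guidelines boil down to (the directory and file names are illustrative):

C#:
using System;
using System.IO;

class PathGuidelines
{
    static void Main()
    {
        string baseDir = AppDomain.CurrentDomain.BaseDirectory;

        // Fragile: works only on Windows. The separator is hard-coded and
        // the casing doesn't match the on-disk name "config.xml".
        string fragile = baseDir + "Data\\Config.XML";

        // Portable: let the runtime pick the separator and match the
        // on-disk casing exactly, because Linux filesystems care.
        string portable = Path.Combine(Path.Combine(baseDir, "Data"), "config.xml");

        Console.WriteLine("{0} vs {1}", fragile, portable);
    }
}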

Running the Tick

The game’s tick runs every ten minutes. It’s a very heavy process that loads a lot of data from the database, operates on it, and then persists it. It needs a lot of RAM and CPU, so it makes a good performance test. On this specific process we found Mono to be lacking: where it took 60 seconds on Mono, the same process on the Windows development machine, connecting to the production database, took 30 seconds. And the development machine is worse than the production server.

However, for the cost of a Windows license, we could buy a great machine just for tick processing. Would it be worth it? We don’t know at the moment.

Conclusion

Preparing the Mono environment was fun and interesting, and the issues we hit porting the code were minimal. We did need some help, and I find the Mono mailing list not that friendly to newcomers, but Google provided what we needed. Even so, it’s not easy for developers without Linux administration experience to set up Mono. There are always issues here and there that we’d know how to fix on Windows but that cost us a lot of time to figure out on Linux.

Although Mono behaves nicely most of the time, we don’t find it as stable as a Windows machine. Even so, it’s a great option, that’s for sure. I already have a Slicehost account with an Ubuntu+Mono setup running all my private ASP.NET sites. It’s cheaper and runs really well.

But for the game’s production server, we aren’t yet convinced whether we should continue using Mono. Maybe we’ll launch another server on Windows, and then we’ll have a good performance comparison.

MySQL on full UTF8

May 11th, 2009

On the Orion’s Belt Translation Project we had no encoding problems until a player from Croatia told us that Croatian characters weren’t being properly persisted. I tried a direct update:

update Lang set Text = 'č,ć,ž,đ,š'

And MySQL complained with an invalid-characters error. To fix this I had to change the column encoding to UTF8 (it was latin1). But the application still wasn’t behaving properly: it was sending MySQL the following query:

update Lang set Text = 'c,c,z,d,s'

I don’t know what was transforming the characters (maybe the MySQL connector). To fix this I had to edit the connection string and add the utf8 charset:

Server=s;User ID=u;Password=p;Database=d;CharSet=utf8

After these steps, everything worked fine.
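
For reference, the column change was a statement along these lines (the varchar length here is illustrative; yours may differ):

alter table Lang modify Text varchar(255) character set utf8;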

Twitter API from C#

October 24th, 2008

I wrote this code to test API interaction with Twitter via .NET. You can see it in action on the Orion's Belt twitter page: the game logs the result of every battle to that account.

C#:
using System;
using System.IO;
using System.Net;
using System.Web;

private const string TwitterJsonUrl = "http://twitter.com/statuses/update.json";
private const string TwitterUser = "your_user";
private const string TwitterPass = "your_pass";

private static void SendTwitterMessage( string message )
{
    try {
        HttpWebRequest request = (HttpWebRequest) WebRequest.Create(TwitterJsonUrl);

        // build the url-encoded POST body
        string post = string.Empty;
        using( TextWriter writer = new StringWriter() ) {
            writer.Write("status={0}", HttpUtility.UrlEncode(message));
            post = writer.ToString();
            Console.WriteLine("Post: {0}", post);
        }

        SetRequestParams(request);

        request.Credentials = new NetworkCredential(TwitterUser, TwitterPass);

        using( Stream requestStream = request.GetRequestStream() ) {
            using( StreamWriter writer = new StreamWriter(requestStream) ) {
                writer.Write(post);
            }
        }

        Console.WriteLine("Length: {0}", request.ContentLength);
        Console.WriteLine("Address: {0}", request.Address);

        string content;
        using( WebResponse response = request.GetResponse() )
        using( Stream responseStream = response.GetResponseStream() ) {
            using( StreamReader reader = new StreamReader(responseStream) ) {
                content = reader.ReadToEnd();
            }
        }

        Console.WriteLine(content);
    }
    catch( Exception ex )
    {
        Console.WriteLine(ex);
    }
}

private static void SetRequestParams( HttpWebRequest request )
{
    request.Timeout = 500000;
    request.Method = "POST";
    request.ContentType = "application/x-www-form-urlencoded";
    //request.Referer = "http://www.orionsbelt.eu";
    request.UserAgent = "Orion's Belt Notifier Bot";
#if USE_PROXY
    request.Proxy = new WebProxy("http://localhost:8080", false);
#endif
}

Big SEO Mistake!

July 30th, 2008

We launched a site about a month ago following all the SEO guidelines except the "use the keywords in the url" rule. I don't know why, but I never really cared about that rule. If Google only cares about good content, why should the url matter? Even so, wanting to do everything by the book, I added the keywords to the title. My urls, though, were something like this:

Page.aspx?Id=1

It would be hard to change the site structure to use urls like Keyword1/Keyword2/, so I tried a little trick. I assumed that Google just wanted the keywords in the url and didn't really care about the structure, so I changed the urls to add a dummy ref parameter:

Page.aspx?Id=1&ref=Keyword1/Keyword2

Well, that was a quick solution, and it worked like a charm. However, I already had several pages indexed under the old url structure. No problem, I thought, I'll just permanently redirect them to the new structure. And so I did the following:

C#:
// permanently redirect
HttpContext.Current.Response.StatusCode = 301;
HttpContext.Current.Response.Redirect(theNewUrl);

And the visits sent by Google dropped sharply.

And I didn't know why... maybe it was some re-indexing; the site was fairly recent. But it was strange. I do know that Google penalizes websites with duplicate content, and with my url structure change I now had two different ways to reach the same page. That could be it, but the permanent redirect should have fixed it. Well, wget to the rescue! Out of curiosity I ran:

wget my.website.com

And the output was something like:

...
HTTP request sent, awaiting response... 302 Found
Location: theNewUrl [following]
...

Well, that's problematic: the website was returning a 302 status code when it should have returned a 301. With this configuration I really had two urls for the same page, which Google may have flagged as duplicates. My problem was the redirect code, because Response.Redirect internally sets the status code to 302. I solved it this way:

C#:
HttpContext.Current.Response.StatusCode = 301;
HttpContext.Current.Response.AddHeader("Location", theNewUrl);

And now, wget shows the correct information:

...
HTTP request sent, awaiting response... 301 Moved Permanently
Location: theNewUrl [following]
...

I really hope that the website recovers from this. It may take a while, we'll see.

I have a project where I need to process large amounts of data every night. Basically I have to fetch a lot of objects and related collections from the DB using NHibernate, operate on them, and then do some updates. This process usually takes about 2-3 hours.

Today I discovered a way to brutally increase the application's speed. I had noticed that the application started fast but would gradually become slower. I didn't know why, so I tried freeing already-used objects and splitting the process into several separate instances. That helped only a little. I was doing something like this (we use a facade on top of NHibernate, so: different types, but the same concepts/names):

C#:
public static void Operate()
{
    using( IPersistanceSession session = GetNHibernateSession(MasterSession) ) {

        IList list = GetALotOfObjects();
        foreach( object obj in list ) {
            Operate(session, obj);
        }

    }
}

This code is very slow. Some time ago I had to dig into the NHibernate source code to understand how to use sessions with ASP.NET, and I noticed that NHibernate keeps a lot of internal objects. In this case I am fetching a lot of objects that have no relation to each other, so I guessed I could clear the session from time to time... and adding a clear boosted the speed considerably:

C#:
public static void Operate()
{
    using( IPersistanceSession session = GetNHibernateSession(MasterSession) ) {

        session.Clear();

        IList list = GetALotOfObjects();
        foreach( object obj in list ) {
            session.Clear();
            Operate(session, obj);
        }

    }
}

I really can't explain this in depth, as my knowledge of NHibernate internals is minimal, but believe me when I say that Clear made a difference... a 2-3 hour to 10 minute difference! My best guess is the session's first-level cache: every loaded entity stays tracked by the session and is dirty-checked on flush, so an ever-growing session gets ever slower, and Clear empties it.

Bonus for having a child

July 16th, 2008

A friend of mine who works at Dynargie told me this weekend that the company announced it will pay a 1000€ bonus to every employee who has a child! They'll pay 400€ upon birth, 200€ in the first year, and the rest in the second year.

Sometimes we see women being fired just for becoming pregnant, so I see this bonus as a great way to motivate employees (and also to keep them around for two years). Kudos to Dynargie!

I'm becoming a big fan of the Google Chart API, because it's a very easy way to add charts to a web application. All you have to do is choose a chart type and bundle in the data, and Google will give you a clean, pretty chart. No configuration, no third-party dependencies: it's as easy as it gets.

I've recently incorporated Google charts into what I like to call our application's evolution data. Good metrics are always important, and several tools out there, like Google Analytics, provide good data. The problem is that these tools only handle visits, pageviews, referrals, keywords, etc.: generic information that makes sense for every site. But what about custom application information?

How can we track the number of active users, last-day logins, new users per day, and other such metrics? This is what I call application evolution, and I'll show how I've achieved it using the Google Chart API.

The Objective

My objective is to have a simple representation of each metric's evolution. The metric can be anything, and for each metric I want to present, in the admin area, a chart and the last registered value. Example:

[Chart API example: sparkline for the Active Users metric, last value 77]

Using Google Chart API with C#

We interact with the API through urls: we build a url with the chart info, and Google returns the chart image. The documentation on their site is really helpful. So, let's break down the previous chart. The url for that chart is:

http://chart.apis.google.com/chart?cht=ls&chd=t:10.0,58.0,95.0,40,50,10,5,99,0,8,44,32,78,65,43,21,50,10,5,99,0,8,44,32,78,65,43,21&chs=200x40&chm=B,DEEAF3,0,0,0&chco=6797CB&chf=bg,s,444444

If we split the url, we get:

  • http://chart.apis.google.com/chart? - The endpoint
  • cht=ls - The chart type; this one is the sparkline: it's like a line chart without the axes, very Analytics-like
  • chd=t:10.0,58.0,95.0,40,50,(...) - The actual chart data, using text encoding. With this encoding you specify your data as percentages, separated by commas. If you need a larger value collection, you might want to consider the extended encoding
  • chs=200x40 - The chart size
  • chm=(...)&chco=(...)&chf=(...) - The chart colors

Preparing the Metrics

We can only build the charts after gathering the metrics. To gather them I use an XML file that is created every day. The file looks like this:

XML:
<Metrics Date='7/7/2008 2:58:47 PM'>
  <UserMetrics>
    <NewUsers Value='1' />
    <ActiveUsers Value='0' />
    <RegisteredUsers Value='4' />
    <LastDayLogins Value='3' />
  </UserMetrics>
  <EntityCountMetrics>
    <ExceptionInfo Value='2' />
  </EntityCountMetrics>
</Metrics>

Each day I create an XML file like this, named Metrics[Year]-[DayOfYear].xml.
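
For illustration, here's a minimal sketch of how such a daily file could be produced (the metric values are hard-coded; in the real application they come from the database):

C#:
using System;
using System.Xml;

class MetricsWriter
{
    static void Main()
    {
        DateTime now = DateTime.Now;

        // file name pattern from the post: Metrics[Year]-[DayOfYear].xml
        string fileName = string.Format("Metrics{0}-{1}.xml", now.Year, now.DayOfYear);

        using (XmlTextWriter xml = new XmlTextWriter(fileName, null))
        {
            xml.Formatting = Formatting.Indented;

            xml.WriteStartElement("Metrics");
            xml.WriteAttributeString("Date", now.ToString());

            xml.WriteStartElement("UserMetrics");
            xml.WriteStartElement("NewUsers");
            xml.WriteAttributeString("Value", "1");
            xml.WriteEndElement(); // NewUsers
            xml.WriteEndElement(); // UserMetrics

            xml.WriteEndElement(); // Metrics
        }
    }
}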

From Metrics to Charts

With the XML files in place and the API ready to present the information graphically, we only need to do the actual data import. Once a day I open the previous 30 files and build a list of values per metric: I create an XPathDocument for each file and extract one value per metric, which I register in a global metrics dictionary. After this step, I just have to build the chart URL.
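
Here's roughly how that last step could look (a sketch: the method name and the normalization to percentages are my own, not the exact production code):

C#:
using System.Collections.Generic;
using System.Globalization;
using System.Text;

static class ChartUrlBuilder
{
    // Builds a Google Chart API sparkline url from raw metric values.
    public static string BuildSparklineUrl(IList<double> values)
    {
        // The "t:" text encoding expects values in the 0-100 range,
        // so express each value as a percentage of the series maximum.
        double max = 0;
        foreach (double v in values)
            if (v > max) max = v;

        StringBuilder data = new StringBuilder();
        foreach (double v in values)
        {
            if (data.Length > 0) data.Append(',');
            double percent = (max == 0) ? 0 : (v / max) * 100;
            data.Append(percent.ToString("0.0", CultureInfo.InvariantCulture));
        }

        return "http://chart.apis.google.com/chart?cht=ls"
            + "&chs=200x40&chm=B,DEEAF3,0,0,0&chco=6797CB"
            + "&chd=t:" + data;
    }
}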

I write all the charts for all the metrics to an HTML file that is imported by the administration front page.

This is nice, but what I'd really like is to feed my custom metrics into Google Analytics and have them do all the storing, manipulation, and presentation. I mailed them suggesting this as a new feature but, I don't know why, I never got a reply! :P

Geeks always like to argue about useless topics, and StringBuilder vs StringWriter was my latest .NET techy discussion. This one is even more useless than usual: we have two objects that do exactly the same thing: they concat/format strings.

Why do they both exist? Well, I don't know. Based on the MSDN documentation, it seems that StringWriter is just a facade over a StringBuilder that implements TextWriter. And that is the reason why I prefer StringWriter. StringBuilder has an alien interface: methods like Append and AppendFormat exist only on the StringBuilder class, while StringWriter has all the well-known methods like Write, WriteLine, etc.

But the main reason isn't just the names. By accepting TextWriters, we can write generic code. For example, consider this method:

C#:
public void WriteXml( TextWriter writer ) {}

We can now invoke it to build an in-memory representation (StringWriter), to write the XML into an ASP.NET control (HtmlTextWriter), or to write straight to Console.Out. Obviously, we could also use a StringBuilder and then write its contents to the TextWriter... but that would definitely be uglier.
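
A quick sketch of that flexibility (the XML payload here is a dummy):

C#:
using System;
using System.IO;

class XmlWriteDemo
{
    // One method serves memory, console, or any other TextWriter.
    public static void WriteXml(TextWriter writer)
    {
        writer.WriteLine("<status>{0}</status>", "ok");
    }

    static void Main()
    {
        // in-memory representation via StringWriter...
        using (StringWriter memory = new StringWriter())
        {
            WriteXml(memory);
            Console.WriteLine("Captured: {0}", memory.ToString());
        }

        // ...or straight to the console, with no changes to WriteXml
        WriteXml(Console.Out);
    }
}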

So... ah... this proves mathematically and without a doubt that StringWriter wins the battle.