Saturday, November 22, 2008

Agile Fail?

Slashdot has had links to When Agile Projects Go Bad at cio.com and James Shore's post on The Decline and Fall of Agile this past week, and between them they've raised a couple of interesting points for me, mainly concerning the manner in which Agile is adopted in a business.

The CIO article opens with a piece describing how most people acquire new skills:
In the first stage, people need to follow a recipe. In the second stage, you say 'That's all very nice, but I need more,' so you go off and collect recipes and ideas and techniques. And in the final stage—if you ever get there—is a level of fluency and intuitiveness where you can't say what you're doing, but you kind of borrow and blend.
Introducing Agile as a methodology tends to lead people, in the beginning, to conform to a checklist culture. The article then goes on to describe why this almost always has a negative impact on the process of building software.

Some people just assume that incorporating the practices defined within a particular Agile methodology makes their business agile. Anyone who has been in a team where this has happened can quite clearly say this is not the case: Weekly releases? Check. Pair programming? Check. Test driven development? Check. Requirements as stories? Check. How does going through the motions of applying these techniques lead to a better product?

Answer: it doesn't. I'm being quite absolute on this: if you're in a team that claims to be "agile" just because you are using some techniques you read in Extreme Programming Explained then think again.

Agile is a mindset you have to get into. It's about trust and professionalism. It's about seeing a problem as a series of stages, and tackling each one separately. It's about communicating with your team and customers constantly. It's about change.

It's not about processes, and yet the processes have become the poster children for the entire movement. Read the original Manifesto for Agile Software Development. Read the Principles behind the Agile Manifesto. There are no concrete techniques there at all.

These ideas are what Agile is about, but their abstract nature is proving to be its downfall. Because the principles are so abstract, people bought into the likes of XP and Scrum, where concrete techniques were laid out. And it's always the techniques that are going to take precedence over the ideas.

You can see that members of a team are pair programming. You can monitor the number of tests that developers are writing. You can see requirements listed in story form. But how do you monitor Agile's ideas?

How do you tell if your team is motivated? How do you tell if your team is self-organised? How do you tell if your team is keeping a constant pace? These things are much harder to keep track of, but they stand at the heart of what pure Agile is about.

And it's this that leads some people to believe that Agile is a failure. In many ways it is, and I won't sit here and defend some implementations of it, but at its heart the principles are sound.

But it's not for everyone. For Agile to make an impact you need a good team: a trusted team with experience, one that isn't too jaded to try new things. A team that communicates, and where mistakes aren't punished but taken on as just another problem to solve.

Communication and teamwork are key to making a success of Agile. It takes a good team to pull it off, I admit that. Some teams won't thrive under its ideals, but when you put the right people together it gives them all the help they need.

Just don't get too focused on the techniques you read about.

Sunday, November 16, 2008

JQuery - How Javascript should be

I go for months without posting and then drop 2 in one night...

Let me first get something straight: I'm a focused lazy developer. If there's something already written that I can use to solve a problem I'll use it. As long as it's half decent and does what I need. But this has a flip side in that I code my own stuff with reusability in mind. Why spend hours solving a problem only to revisit it in another project a month down the line?

So I must say that for a long time I didn't really get Javascript. There didn't seem to be many opportunities for code reuse: each animation or validation check I did seemed wholly connected to one particular purpose. I struggled to generalise my solutions to enough of a degree so that they could be reused.

The impact of this was that I shied away from Javascript whenever I could: so when Microsoft came along with the AJAX Toolkit I was in heaven. Here was a collection of cool controls I could just drop onto a form and get a popup div, or a water mark on a text box, or a whole bunch of other stuff.

But at the back of my mind I couldn't help thinking this was all overkill. If I want a div to pop up when I click a link, surely all I need is a CSS style on it and a line of Javascript to swap its display style? So why have that as a control? Simplicity of implementation. It's easy. The lazy part of me liked that.

The focused part didn't. The focused part wanted to come up with a library that contained a load of these common functions so I didn't have to re-invent them on every project.

JQuery does just that, and more. I won't pollute the web with yet another JQuery tutorial (go see Rick Strahl if you want one) but I will say this. If you're a developer who likes to separate their concerns, take a look.

We're all familiar with layering applications: Model-View-Controller, Interface-Business-Data Access--it all amounts to the same thing. We're separating each element of the application so that we have structure. Structure eases maintainability. It removes bad smells. It increases orthogonality. For want of a better term--It Just Feels Right.

So why stick Javascript code in HTML? Isn't that polluting the structure of the document with behaviour?

Linking to a CSS file allows us to separate styling information from our HTML (or ASPX if you're into ASP.Net like me), and we all know you can link to Javascript files in the same way. If you're anything like I was though, all those Javascript files will contain is common "utility" functions. You'll still have "onclick" attributes in your HTML calling Javascript methods.

JQuery allows you to get rid of these onclick calls. Here's what I do:
  • Have a JS file per page in your application
  • Link to this JS file in your page as you normally link to script files
  • Do not include any script in your page whatsoever
  • In the page's JS file, make use of the $(document).ready call to wire up your event handlers
For example: you have a floating div ("helpDiv") whose visibility you want to toggle on the click of a link ("lnkToggleHelp"). I used to add an onclick attribute to the anchor tag calling a Javascript function that would toggle the visibility.
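For illustration, the markup difference looks like this (the toggleHelp function here is a hypothetical helper from the old approach):

```html
<!-- The old way: behaviour baked into the markup -->
<a id="lnkToggleHelp" href="#" onclick="toggleHelp(); return false;">Help</a>

<!-- The JQuery way: the anchor carries no behaviour at all -->
<a id="lnkToggleHelp" href="#">Help</a>
<div id="helpDiv" style="display: none;">Some help text.</div>
```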

In JQuery you can do this in your page's JS file:
$("#lnkToggleHelp").click(function(event) {
    event.preventDefault();
    $("#helpDiv").toggle();
});
This separates the behaviour of the page from the structure, leaving you with 3 separate but linked strands of a page: Style (CSS), Structure (HTML), and Behaviour (JS files incorporating JQuery). To me it's a whole lot neater.

That and all the effects and manipulation that you can do with JQuery means I can't see myself writing Javascript in the future without using JQuery.

Building a PC - easier than I thought

Last week I ordered a Shuttle G5 from icubes.co.uk along with a processor, memory, HDDs, and a DVDRW drive. I've never assembled a PC of my own before, but I've installed enough cards and memory to know enough about what makes one work--so I reckoned I knew enough to take the plunge and go for it.

And I must say it was relatively easy. There were a few stumbling blocks: I'd ordered 2 SATA HDDs and a SATA DVDRW drive and the case only came with 1 SATA cable, but I salvaged cables from my old PC so I didn't have to take a trip to Maplins yesterday afternoon. The only other problem was smearing the thermal paste onto the processor. I've fitted processors once before, but that one came with a template that you stuck onto the bottom of the heatsink and filled. There wasn't anything like that here, so I just used a piece of an old business card that seemed flexible enough.

It must be working though, because it hasn't overheated yet and it's been on all day. Very quiet too, whereas my last PC sounded like a jet engine because of all the fans in it. I like the compactness as well--it's smaller than the subwoofer that I had with my old PC.

I was going to post a step by step guide to how I assembled it, but there doesn't really seem to be much point. The case comes with instructions on what to plug in when, and if you've got all the cables you need you're fine.

The only things I would say if you're looking to do the same are:
  • Make sure all the parts will fit into the chassis you're ordering
  • Don't use too much thermal paste: it goes everywhere if you overdo it
  • Buy extra SATA cables if you've chosen more than 1 SATA drive
  • If you're not using IDE drives, you can take out the IDE cable to make room
  • Your processor may come with a heatsink and fan, but the Shuttle includes one, so use it because it's quieter
  • If you go with the G5 make sure you mount the DVD drive close enough to the front so the button opens the tray (you'll understand when you get there ;))
But apart from that, it's not too difficult. I wouldn't recommend it to an absolute novice as there are some things that aren't covered in the instructions, but if you know how PCs work you should be fine. And you'll end up with a pretty PC at the end of it.

Wednesday, July 02, 2008

An Open Letter to M1 J26-J28 Drivers

Work is currently under way to widen the motorway between these two junctions, both north and southbound. As part of this work a 50mph speed restriction is in place due to narrow lanes that allow the work to take place.

Could I please remind everyone (or inform, if you haven't realised already) that the outside lane is much narrower than the other 2 and to keep your vehicle in that lane if you choose to drive in it?

I have lost count of the number of vehicles (well, cars mostly, as van drivers seem to be aware of the road they are taking up) that have nearly hit me by drifting towards the middle lane. That stretch of road is congested enough during rush hour: for God's sake don't add to it by causing a pile up.

So please folks, stop fiddling with your sat nav. Stop farting about with your car's onboard computer. Put the phone down and stop nattering into your hands free kit. Stop trying to read a god damned book while doing 50mph in a narrow busy lane (yes, I have seen someone doing this). Stop doing anything that takes your attention away from staying in the lane. It's not difficult.

Saturday, June 21, 2008

Untitled

Friday night/Saturday morning, and I'm killing time until I'm tired enough to sleep. The cat on my lap has his white paw resting in an empty shot glass that did have a neat JD in about half an hour ago. Got the house to myself but there's nothing on TV worth watching so I'm doing what my girlfriend hates me doing. Flicking through the music channels until I find a song I like, then flicking around again when it finishes.

Whatever happened to Sade? Can't say I ever liked that kind of music but it'll always scream mid 90s to me. My sister seemed to loop that CD when I was in secondary school and it looks like it just stuck.

Same with The Bangles. I remember a girl in primary school singing Eternal Flame (think her name was Carla) and the teacher (Mrs Bettison?) saying she wanted to be sent tickets to her concert when she was famous. That was probably 20 years ago now, but that song triggered that memory as well as any smell could.

No Doubt's Don't Speak is on now, and I'm instantly at school again doing my GCSEs wondering what my life will hold and why Hayley chose to wear a black bra under a white top for the last exam of the year. But as soon as it hits it's gone again, and I'm dragged back to now with a cat padding my leg and me wondering where any of this is even going.

I don't really think it's going anywhere actually. Probably shouldn't be writing blogs at gone midnight after half a bottle of wine and JD but there you go.

Oh, Black Hawk Down's on

Friday, June 20, 2008

Creating Mock Web Services in .Net

So, a situation arises where your code needs to make a Web Services call out to a different system. Chances are you'll go through the usual stages of adding a Web Reference to your project in Visual Studio, and then use the generated code to make your call. Simple, right?

Well, yes, and Visual Studio goes out of its way to simplify the creation of this client code, so you can get on with calling the service rather than concern yourself with the plumbing that's required.

But what if the service you are calling charges you for every call you make? What if you can't guarantee that you will be online during the development of your code? What if that service is currently under development and you don't know whether it will be available when you are doing your testing?

If any of these apply you need to remove the external call to the web service from your code. In a unit test scenario you would probably use something like NMock and architect your code to use a dependency injection pattern: injecting an NMock created object that matched an expected interface.

In other cases what you can do is create a mock Web Service. One that matches the interface of the live one, but which you control.

You can easily create interfaces that match a WSDL file by using the WSDL.exe program. I have a sneaking suspicion that this tool is what Visual Studio uses to create the client code when you add a new Web Reference, but it can also work the other way around.

First, obtain the WSDL of the Web Service you want to mock up. (If you are calling a .Net Web Service, then the path will probably end .asmx. To get the WSDL, just append ?wsdl to the path and you'll get the WSDL XML). Save this to your local machine.

Open up the Visual Studio Command Prompt and type wsdl.exe. You should get a heap of text explaining the command line switches; if you don't, your paths aren't mapped correctly. The wsdl.exe file should be somewhere on your machine though :)

Once you've found the tool, type

wsdl /language:CS /namespace:Your.Namespace.Here /out:Directory\To\Save\To\ /protocol:SOAP /serverinterface your-wsdl-file-here.wsdl

There are other options too if you need them, like setting proxy username and passwords, but the one shown is what I've used and it works fine.

This will create a .cs file in the /out directory that contains a number of classes that match the object definitions in the WSDL. It will also contain one interface that you must implement in order to complete your mock Web Service.

To do this, create a new ASP.Net Web Application. (I guess you could create a Web Site, but I've not tried that as I don't like them). Add the code file you generated above to the project, then add a new Web Service.

In the code behind for this Web Service, change the class definition so that it implements the interface in the generated code. Visual Studio should help you out by generating method stubs so the class matches the interface.
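As a sketch of what that looks like (the interface and method names here are hypothetical--yours will come from the generated file):

```csharp
using System;

// Hypothetical interface, standing in for the one wsdl.exe generates
public interface IStockQuoteService
{
    decimal GetQuote(string symbol);
}

// The mock Web Service's code behind. In the real project this class would
// also derive from System.Web.Services.WebService and each method would carry
// the [WebMethod] attribute; both are omitted to keep the sketch self-contained.
public class MockStockQuoteService : IStockQuoteService
{
    public decimal GetQuote(string symbol)
    {
        // Return canned data that your tests can rely on
        if (symbol == "MSFT")
        {
            return 28.5m;
        }

        return 10.0m;
    }
}
```

The point is that the mock returns predictable values you control, so your calling code can be exercised without ever touching the live service.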

And there you have it: put code in the method stubs to do whatever you want.

You now have your mock web service. Run the application to make sure that it works, and then make a note of the address of the new mock service. Enter this as the URL of the web service you want to call in your App/Web.config file in place of the live one, and your application should now call your mock instead of the live one.
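For example, if your generated client reads its URL from configuration, the swap is a one-line change in Web.config (the key name and port here are made up--use whatever your client actually reads):

```xml
<appSettings>
  <!-- Live service, swapped out while developing against the mock -->
  <!-- <add key="StockQuoteServiceUrl" value="http://livehost/StockQuoteService.asmx" /> -->
  <add key="StockQuoteServiceUrl" value="http://localhost:8010/MockStockQuoteService.asmx" />
</appSettings>
```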

A little tip though. I've noticed that Visual Studio (2008, at least) doesn't like attaching to 2 IIS processes for debugging. If you are calling your mock service from an ASP.Net application and you want to debug both, you'll need to start one of the applications in the Visual Studio Development Server instead of running under IIS. If you do this, make sure you assign an explicit port value rather than an auto generated one, otherwise your calling code won't be able to call it :)

Monday, May 26, 2008

Refactoring

I've currently got a post on ice about using CLR functions in SQL Server 2005 using LINQ (don't worry Paul, I've not forgotten), but while I mull that over I'll just make a short point about refactoring code.

Regular readers (and Google Analytics tells me that I do have some, at least) might have picked up on a few complaints I've had about VB.Net. Personally I wish I never had to use it, but sometimes rewriting an entire app in C# isn't cost effective so... needs must, and all that.

Well, I was recently in the midst of a ton of VB.Net code, thinking "I could do with some IDE refactoring support here," after needing to extract a code block into its own method. Except I couldn't find any such support in Visual Studio 2008.

Using C# in Visual Studio (2005 and 2008), we have a neat little context menu option called Refactoring, and while it doesn't offer the depth of features that something like Resharper offers, it at least allows you to Extract Method, which was the refactoring technique I needed to use.
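For anyone who hasn't come across it, Extract Method just means pulling a block of code out into a named method. A contrived before-and-after (the invoice example is invented):

```csharp
using System;

public class InvoiceCalculator
{
    // Before: the VAT calculation is buried inline
    public decimal GetInvoiceTotal(decimal[] lineAmounts)
    {
        decimal total = 0;
        foreach (decimal amount in lineAmounts)
        {
            total += amount;
        }
        return total * 1.175m; // add VAT at 17.5%
    }

    // After: the VAT rule has a name, and other code can call it
    // instead of copying the calculation
    public decimal GetInvoiceTotalRefactored(decimal[] lineAmounts)
    {
        decimal total = 0;
        foreach (decimal amount in lineAmounts)
        {
            total += amount;
        }
        return AddVat(total);
    }

    private decimal AddVat(decimal netTotal)
    {
        return netTotal * 1.175m;
    }
}
```

The behaviour is identical; the difference is that the next change to the VAT rule happens in exactly one place. The IDE support I was after does this transformation for you.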

Long story short, I found this post from the VB.Net Technical Lead:
I’d never even heard of refactoring until C# added the feature to their IDE. I’ve never bought a copy of, much less read, Refactoring: Improving the Design of Existing Code.

I let out an audible sigh after reading that. In a way it explains a great deal about VB.Net.

Refactoring should be something that every professional developer knows about and practices every day of their working lives. It is an integral part of the development cycle, and one which even graduates should at least be aware of.

If we do not refactor we end up repeating code needlessly. Although this might produce a working application that passes all current tests, put yourself 6 months, or 6 years down the line where you have to add a new database table, or a column to a table, or a new page to your web application.

If you've essentially been copying and pasting code, how many places in the code base do you have to change to add that small piece of new functionality? Answer: unknown, especially if you weren't involved in the initial project. How do you know that you've caught all the places without executing every branch of the application?

Answer: you don't, not until you get irate customers yelling at you, and your managers asking why such a small change resulted in the entire app breaking.

And it's such a simple thing that I have difficulty believing that any developer worth their salt wouldn't see the benefits it gives. So if you don't know it, read up and get using it. Your code can only get better.

Wednesday, May 07, 2008

IEnumerable to DataSet Extension Method

I recently had a need to create a DataSet from a List<T>, and found this post that did just that. I was happily using that in C# code, but then had a requirement to use it from a VB.Net app.

I'm not even going to attempt to hide my contempt for VB, and one of the things that I quickly tire of is typing the exact same thing multiple times. The Intellisense in VB also doesn't seem as smart as it is in C#; mainly, it doesn't help as much when instantiating objects.

So I wanted a simpler way of calling the code that I had from the post above, and I immediately thought of extension methods. If I could create an extension method that made a DataSet from an IEnumerable, then I could call that from the VB app with a minimum of fuss.

Kudos must go to Keith Elder for his original code, but if you want it in extension method form, then here it is:

public static class CollectionExtensions
{
    public static DataSet ToDataSet<T>(this
        IEnumerable<T> collection, string dataTableName)
    {
        if (collection == null)
        {
            throw new ArgumentNullException("collection");
        }

        if (string.IsNullOrEmpty(dataTableName))
        {
            throw new ArgumentNullException("dataTableName");
        }

        DataSet data = new DataSet("NewDataSet");
        data.Tables.Add(FillDataTable(dataTableName, collection));
        return data;
    }

    private static DataTable FillDataTable<T>(string tableName,
        IEnumerable<T> collection)
    {
        PropertyInfo[] properties = typeof(T).GetProperties();

        DataTable dt = CreateDataTable<T>(tableName,
            collection, properties);

        IEnumerator<T> enumerator = collection.GetEnumerator();
        while (enumerator.MoveNext())
        {
            dt.Rows.Add(FillDataRow<T>(dt.NewRow(),
                enumerator.Current, properties));
        }

        return dt;
    }

    private static DataRow FillDataRow<T>(DataRow dataRow,
        T item, PropertyInfo[] properties)
    {
        foreach (PropertyInfo property in properties)
        {
            dataRow[property.Name] = property.GetValue(item, null);
        }

        return dataRow;
    }

    private static DataTable CreateDataTable<T>(string tableName,
        IEnumerable<T> collection, PropertyInfo[] properties)
    {
        DataTable dt = new DataTable(tableName);

        foreach (PropertyInfo property in properties)
        {
            dt.Columns.Add(property.Name);
        }

        return dt;
    }
}

It creates a DataSet with one table that has the name you pass in. In my case I didn't need to name the DataSet explicitly so just used a constant, but the code above could easily be updated to pass in a DataSet name if you need it.

Now you should be able to call ToDataSet on any object that implements the IEnumerable interface.
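To show the call site, here's a self-contained sketch (it inlines a trimmed copy of the extension so it compiles on its own, and the Person type is invented for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Data;
using System.Reflection;

public static class CollectionExtensions
{
    // Trimmed copy of the extension from the post, inlined for self-containment
    public static DataSet ToDataSet<T>(this IEnumerable<T> collection, string dataTableName)
    {
        if (collection == null) { throw new ArgumentNullException("collection"); }
        if (string.IsNullOrEmpty(dataTableName)) { throw new ArgumentNullException("dataTableName"); }

        PropertyInfo[] properties = typeof(T).GetProperties();

        // One column per public property
        DataTable dt = new DataTable(dataTableName);
        foreach (PropertyInfo property in properties)
        {
            dt.Columns.Add(property.Name);
        }

        // One row per item
        foreach (T item in collection)
        {
            DataRow row = dt.NewRow();
            foreach (PropertyInfo property in properties)
            {
                row[property.Name] = property.GetValue(item, null);
            }
            dt.Rows.Add(row);
        }

        DataSet data = new DataSet("NewDataSet");
        data.Tables.Add(dt);
        return data;
    }
}

// Invented type for the example
public class Person
{
    public string Name { get; set; }
    public int Age { get; set; }
}
```

Calling `someListOfPeople.ToDataSet("People")` then gives you a DataSet whose "People" table has a Name and an Age column, one row per item.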

Tuesday, April 22, 2008

C# Extension Methods Part 2: Extending Log4Net

OK, so after last time's brief introduction I'll share with you a Log4Net extension that I've made. It's nothing really too complicated but I hope it will highlight what I feel are the advantages of this technique.

So, on to business. If you're like me you'll like your logging code. I think logs are wonderful, if only to check that your code is doing what you think it is. Of course, there's always the chance that you'll have too much logging, but that's what log levels are for, right? ;)

The following is a common pattern I use for logging at pretty much every level (I've shown Debug here):

int someValue = 3;
if (log.IsDebugEnabled)
{
    log.DebugFormat("This is a value: {0}", someValue);
}

Here I have a value I want to log, so I check that we should log and then log out the value. In this case I guess the check is superfluous, as the statement won't be output if we're not logging at debug anyway, but I like to get into the habit of checking so that when I need to log something like this:

if (log.IsDebugEnabled)
{
    StringBuilder builder = new StringBuilder();
    for (int i = 0; i < 100; i++)
    {
        builder.Append(GetSomeLogStatement(i));
    }

    log.Debug(builder.ToString());
}

I only go into the long running loop if I'm going to get some logging output from it.

When debugging code I like to have log statements that show where the execution path is going. The easiest way of doing that is to output a log statement when you enter a public method, but if you're doing this a lot then that's a lot of code being reproduced.

That's when I thought about making an extension method to handle all this for me. Here's the code:

public static void LogMethodParameters(this ILog logger, Level level,
    string methodName, params object[] parameters)
{
    try
    {
        if (logger == null)
        {
            throw new ArgumentNullException("logger");
        }

        if (level == null)
        {
            throw new ArgumentNullException("level");
        }

        if (methodName == null)
        {
            throw new ArgumentNullException("methodName");
        }

        if (logger.Logger.IsEnabledFor(level))
        {
            StringBuilder builder = new StringBuilder();
            builder.AppendFormat("Method: [{0}]. ", methodName);
            if (parameters != null && parameters.Length > 0)
            {
                // parameters.Length is at least 1
                builder.AppendFormat("Parameters: (p0: [{0}]", GetParameterDisplayValue(parameters[0]));
                for (int i = 1; i < parameters.Length; i++)
                {
                    builder.AppendFormat(", p{0}: [{1}]", i, GetParameterDisplayValue(parameters[i]));
                }
                builder.Append(")");
            }

            logger.Logger.Log(null, level, builder.ToString(), null);
        }
    }
    catch (Exception ex)
    {
        log4net.Util.LogLog.Error("Exception while logging method parameters", ex);
    }
}

private static object GetParameterDisplayValue(object param)
{
    return param != null ? param : "(null)";
}

public static void DebugMethodParameters(this ILog logger,
    string methodName, params object[] parameters)
{
    LogMethodParameters(logger, Level.Debug, methodName, parameters);
}

Like I say, it's nothing too complicated, but it allows you to include this line at the top of each of your public methods:

log.DebugMethodParameters("Method", 1, 2, 3);

And you don't need to worry about anything else: your log file will show that you entered the method with those parameters.

What I wanted to do was use reflection to figure out what the current method was and what the parameter values are, but this was just a little side task I had to do while in the middle of something else.

Here are some additions I could make to this method:
  • Use reflection to get the method name and parameter names
  • Incorporate the Visual Studio 2008 ObjectDumper sample to drill down into complex objects to provide a fuller picture of non-primitive types
  • Allow custom formatting of the log output
This is just the start really. Check out extensionmethod.net for some more samples of what else you can do. (Hmm, that site seems to be having a few problems at the moment, but check back later and browse what they have).

Saturday, April 19, 2008

C# Extension Methods Part 1: Introduction

Extension methods are a new feature of C# 3.0 with .Net 3.5. They allow you to extend existing classes with new methods without having to create a whole new class that inherits from the class you want to add the method to. First off, why would you want to do this?

Well, as an example LINQ makes heavy use of extension methods to provide the functionality to create new expressions easily. A LINQ extension method might extend IQueryable to apply a custom where clause, for example. In fact, as part of the Visual Studio 2008 samples you'll find a set of extension methods that does just that.

So what does an extension method look like? Here's a very simple example:

public static class StringExtensionMethods
{
    public static string Reverse(this string str)
    {
        if (str == null)
        {
            throw new ArgumentNullException("str");
        }

        StringBuilder builder = new StringBuilder();
        for (int i = str.Length - 1; i >= 0; i--)
        {
            builder.Append(str[i]);
        }

        return builder.ToString();
    }
}

Here we have a static class StringExtensionMethods that contains a Reverse method, which reverses the order of the characters in a string.

Notice that the parameter being passed into the Reverse method has the "this" keyword preceding it. This tells the compiler that this is an extension method for the string class.


You can now use this method on any string object in any class that uses the namespace that this static class is in. We also get full Intellisense support in Visual Studio 2008.


There are a few things to be aware of though. Consider this piece of code:

string thisIsNull = null;
Console.Write(thisIsNull.Reverse());

If Reverse was a standard method on the string object, then we would get a NullReferenceException being thrown when we try to call the method.

However, an extension method on a null object will still be called. It is the responsibility of the extension method to check for null values.

The extension method is pretty much just syntactic sugar around a standard static "helper" style method. So when the method is called it doesn't actually need to be part of an instance.
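You can see this for yourself: the two calls below compile down to exactly the same thing (this snippet repeats a trimmed copy of Reverse so it stands alone):

```csharp
using System;
using System.Text;

public static class StringExtensionMethods
{
    // Trimmed copy of the Reverse extension from above
    public static string Reverse(this string str)
    {
        if (str == null)
        {
            throw new ArgumentNullException("str");
        }

        StringBuilder builder = new StringBuilder();
        for (int i = str.Length - 1; i >= 0; i--)
        {
            builder.Append(str[i]);
        }

        return builder.ToString();
    }
}

public class Example
{
    public static void Run()
    {
        string greeting = "hello";

        // Extension method syntax...
        string a = greeting.Reverse();

        // ...is just sugar for a plain static call
        string b = StringExtensionMethods.Reverse(greeting);

        Console.WriteLine(a == b); // both are "olleh"
    }
}
```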

For this reason, make sure you still check that the parameter is not null before using it, just as you would in a normal method.

As a quick aside, this is a perfectly functional extension method:

public static bool IsNull(this object obj)
{
    return obj == null;
}

It could then be called like this:

string thisIsNull = null;
if (thisIsNull.IsNull())
{
    Console.Write("Was null");
}

So we can create extension methods that easily provide some additional functionality to a class. Next time, I'll show how I've implemented a simple extension method to Log4Net.

Wednesday, April 02, 2008

Beginning Lambda Expressions in C#

I think I've been using LINQ to SQL for about a week now, and one of the (many) things to conquer on that particular learning curve are lambda expressions.

Lambda expressions are a new feature of C# 3.0 using .Net 3.5, and look something like this:
t => t.Contains("hello")
When you're starting out this expression can look very strange, but once you get your head around the syntax it's pretty simple... well, in most cases.

Lambda expressions are like anonymous delegates: they have input parameters and an output result, and the new => syntax ties them together. What the example above means is that there is a parameter "t" (in this case a string), and the expression returns a bool (the return type of the string.Contains method).

But how do we know what these types are? They are inferred from the type declaration. The above example would be meaningless on its own, but when put together like this:
// delegate that takes a string and returns a bool
public delegate bool CheckString(string arg);

// create an instance of the delegate using a lambda expression
CheckString newDelegate = s => s.Contains("hello");

// use the delegate to show the expression works
Console.WriteLine("Value: {0}", newDelegate("hello world"));
We can see that the types are inferred from the fact we are creating a CheckString delegate. The delegate would return true when called with this string.

So how does this help us with LINQ? Well, in LINQ queries you'll often see that you need to specify parameters that have types that contain something like this:
Func<T0, TR>
And what the hell does that mean? Well, as part of the framework we have the following generic delegates already defined for us:
public delegate TR Func<TR>();
public delegate TR Func<T0, TR>(T0 a0);
public delegate TR Func<T0, T1, TR>(T0 a0, T1 a1);
public delegate TR Func<T0, T1, T2, TR>(T0 a0, T1 a1, T2 a2);
public delegate TR Func<T0, T1, T2, T3, TR>(T0 a0, T1 a1, T2 a2, T3 a3);
These delegates simply say that, given arguments of the specified types, the delegate returns a value of type TR. In an abstract way, that's all a delegate is really, so don't worry that it can look a bit crazy.

Using this I could rewrite the previous example like this:
// create a lambda expression
Func<string, bool> newDelegate = s => s.Contains("hello");

// use the delegate to show the expression works
Console.WriteLine("Value: {0}", newDelegate("hello Mr Coupland"));
I don't have to define a CheckString delegate any more, and can just use the inbuilt generics to specify what the delegate interface is.

So what does this all mean? Well, if you need to pass in delegates to a method call, instead of having to define the delegate's interface, then structuring an anonymous delegate to match it, you can just use one of the Func delegates and a lambda expression, like in this really contrived example:
// create a boring method that doesn't do much
public bool StringConditionCheck(string str, Func<string, bool> exp)
{
    return exp(str);
}

// call the method from somewhere else
public void MyOtherMethod()
{
    Console.WriteLine(StringConditionCheck("sjdl", s => s.StartsWith("s")));
}
That example's probably a little too simple, but it shows what you can do with a lambda expression passed in to another method.

This is nothing new in itself: you can do all of this with anonymous delegates in .Net 2.0 (and with normal delegates before that), but now there's less code to write.

And when you can create delegates so effortlessly, it makes creating generic Expressions for our LINQ expression trees really simple, but that's for another post.

Monday, March 31, 2008

High memory usage and slow load times in .Net 2.0 apps

Stumbled across news of what was fixed in the .Net 2.0 SP1 update last week. Things of note:

And some more interesting ones that may be useful:
Make sure you check out the full list of fixes to see what might affect your systems.

Tuesday, March 25, 2008

More on goto in C#

Akidan's posted a comment back on my old "goto" post from October last year, and it's got me thinking about it again. The original post was prompted by seeing gotos in .Net Framework code using Reflector, but what we were actually seeing was an optimised version of the code.

The real question comes down to code readability. A high-level language like C# has constructs that make the goto statement superfluous, and a compiler that knows when to turn structured code into simple jumps anyway. So from the perspective of writing high-performance code we don't need to worry: we can let the compiler figure out how best to structure the internal workings for us.

And here is the crux of the matter. If we don't need to worry so much about execution speed, then surely we should write code with an eye towards maintainability?

Now, I'm not saying that gotos make code inherently unmaintainable or unreadable: I'm willing to accept that there are situations where they can be used, but even then only with caution.

I still stand by my belief that gotos should be avoided in code you write yourself. Using Akidan's two methods as an example: the non-goto one is certainly more readable. It's clear that the "if" comprises two checks that result in the same outcome, and having them together like that makes it harder to separate them by mistake later on.
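I don't have those two methods to hand to quote, so here's a hypothetical pair in the same spirit: two checks leading to one outcome, written with and without a goto.

```csharp
// With a goto: the checks that share an outcome are pulled apart
static string DescribeWithGoto(string s)
{
    if (s == null)
        goto invalid;
    if (s.Length == 0)
        goto invalid;
    return "valid";
invalid:
    return "invalid";
}

// Without: the shared outcome is obvious because the checks sit together
static string Describe(string s)
{
    if (s == null || s.Length == 0)
        return "invalid";
    return "valid";
}
```

Both compile down to much the same branching; the difference is that in the second version the two related checks can't drift apart by accident when someone edits the method later.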

Thursday, March 20, 2008

The customer isn't always right

Top 5 reasons why “The customer is Always Right” is wrong [mirror here]

Under the slogan “The customer is always right”, abusive customers can demand just about anything - they’re right by definition, aren’t they? This makes the employees’ job that much harder when they try to rein such customers in.

Also, it means that abusive people get better treatment and conditions than nice people. That always seemed wrong to me, and it makes much more sense to be nice to the nice customers to keep them coming back.

Tuesday, March 18, 2008

Enabling Java applets in Firefox on Ubuntu

First problem I've encountered with Ubuntu was trying to use Facebook's photo uploader from Firefox. It all started out well: I went to the page and was told I needed to install Java, and Ubuntu presented me with a list.

Here's where I made my first mistake: I installed the first one on the list, which wasn't Sun's JRE. I can't remember the name of the one I installed, but after sitting through the installation I got back to Facebook and had a dialog pop up saying the GNU classpath had not been set, or something along those lines.

So I went to verify my Java plugin, and got a message saying I wasn't running the latest version, along with links to download it. Not really knowing what an RPM was, I chose the "Linux (self extracting file)" option and consulted the instructions.

Then came the second problem. The first instruction was to su to root, which required a password, and I'm pretty sure I never set a root password when I installed.

But no problem, I just chose System - Administration - Users and Groups from the top menu in Ubuntu, selected the root user and assigned a password. I then carried on following Sun's instructions to install Java, which were simple enough.

Then came the section Enable and Configure, in which I had to tell Firefox to use this new JRE for applets.

A neat little Firefox tip I didn't know about: type about:plugins in the address bar and you get a list of the installed plugins, which I was able to use to remove the old plugin that didn't work.

The plugins are just symbolic links in the firefox/plugins directory, so enabling one plugin means creating a new link in there, and removing the other effectively uninstalls the old one.
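The mechanism is easy to demo in a scratch directory (the real paths differ per install, so treat the files below as stand-ins, not the actual plugin locations):

```shell
# Stand-ins for firefox/plugins and the JRE's libjavaplugin_oji.so
plugins=$(mktemp -d)
target=$(mktemp)

# "Enable" the plugin: create a symbolic link in the plugins directory
ln -s "$target" "$plugins/libjavaplugin_oji.so"
ls -l "$plugins"

# "Uninstall" it: removing the link leaves the real file untouched
rm "$plugins/libjavaplugin_oji.so"
```

Because Firefox only follows the link, swapping JREs is just a matter of pointing the link somewhere else.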

Job done, Facebook's photo uploader (and I assume other Java applets) now works properly.

XNA 3D Game - CaveIn

Stumbled across a game called CaveIn over at Mykres Space. It's a top-down third-person game where you have to rescue people from a mine, and solve puzzles along the way.

Check out the video for it. Looks nice.

Monday, March 17, 2008

Well I find them funny anyway....


Or go straight to the source.

Need 1 second of surprise? How about a stuffed toy?

I'm not entirely convinced this is serious, but I've just seen an episode of Future Weapons which featured the Kitty Cornershot.

Check out the video and see for yourself.

Wednesday, March 12, 2008

Douglas Coupland's jPod

I've been a huge fan of Douglas Coupland's books since reading Microserfs and Generation X when I was at college, right up to his latest, The Gum Thief, which I'm reading at the moment.

His book before that was jPod, billed as "Microserfs for the Google generation", which I enjoyed as much as any of his others.

I was surprised to stumble across news a few months ago that jPod was being made into a TV series, and checked out the trailer to see what they'd done. And I must admit that I wasn't too impressed. Maybe it was just the usual feeling you have when you find out that someone else's interpretation of a piece isn't the same as your own, but in this case it was something more as well.

I've always found his work utterly engrossing, and I don't agree with some reviewers who say Coupland isn't a master of characterisation. For me, as a 20-something member of the IT rat race, I see his characters all around me (and I'm friends with some of them).

So it was with some disappointment that I finished watching the trailer, and I can't say I was terribly bothered whether it would ever make it across the pond. I'd probably watch it if it did though.

But now I guess that's less likely to happen, as the news breaks that CBC has cancelled the show, and it will not be returning for a second season.

But already the fans have staged a protest, urging people to contact CBC to register their disgust.

Will this work? I doubt it, but there is precedent: Futurama and Family Guy were both cancelled by Fox, only to return after a backlash from fans (and strong DVD sales). So maybe there is hope for the people wishing to see it return.

Me? I'm still waiting for the second season of Heroes to restart...

Tuesday, March 11, 2008

Using Ubuntu

I must say I do like all the nice animations and flourishes that Ubuntu gives to all the windows. They seem to float, fade and snap to position a lot smoother than I've seen on Windows. I particularly like how they seem to stretch and flop into place when you maximise to full screen.

I've not had many problems either. Today I've been doing a lot of admin stuff setting myself up to start contracting in a couple of weeks: just the usual stuff of writing letters, sending emails, copying files to my USB stick and editing images; and I've been fine.

Makes me think back to reading a comment on this Digg article, which said:
I'll use Open Office when it stops looking and running like Word 97

But he's being overly harsh. I've only used the word processor for a couple of letters and was quite pleased with the whole experience. It lets you write documents, check spelling as you type, auto-correct typos, format text and the like: what more do you need?

How many features of Office do you actually use day-to-day? Do you really need to fork out 200+ quid for them when you can do most things using free software?

The only thing I will say, though, is that the title bar of maximised windows occasionally disappears when I hover over it. I'm not sure if this is a feature, to stop you clicking the bar when you mean to click the menu along the top. But it's a minor annoyance at best, and I've enjoyed the experience so far.

Monday, March 10, 2008

Installing Ubuntu

So I've decided to take the plunge and dual boot my desktop PC with XP and Ubuntu. I've played around with the LiveCD version for about an hour, so I know what to expect.

I'm following these instructions, but I needn't have bothered as the setup wizard is pretty self-explanatory. The only thing I needed to double (and triple) check was the disk drive I was installing to.

I have 2 physical hard drives, one with Windows (C:) and one that I just used for data (D:). Preparing for installing Ubuntu I copied everything I needed from D: to an external drive. But knowing that the C: and D: names were Windows specific I checked in the LiveCD environment what names Ubuntu gives the drives.

The installer's currently at 70% as I write this on my laptop, so I hope that my D: drive really was called /media/sdb1, or else I've just buggered my machine...

It should be fine though... I think.... C: was the master drive, and D: the slave, so it made sense that I would have an sda1 and an sdb1. I hope anyway.

82% now, currently Configuring apt, whatever that means.

88%, Importing documents and settings... strange, as I told the wizard I didn't want to do that...

94%, Configuring hardware... I guess if it's all going to fall over in a heap then it'll probably do it now...

Done. Although I'm sure it said it was removing something before that window popped up. Not sure what. Restart now, I guess now I find out if I've screwed up...

Well I've got a boot menu at least, 3 different instances of Ubuntu (generic, recovery, and memtest86+) and Windows XP! Cool, I'll boot back to Windows to make sure I've not messed up. Yay! Still there, and I can no longer see my D: drive, which is as expected if the partition table has changed.

Now for the proper test, will it boot to Ubuntu?

Well it's asking for a username, and the display looks a hell of a lot better than in LiveCD mode. Very quick to get to this stage too.

Apparently I need to enable some "restricted drivers", as my graphics card needs a driver from Nvidia, which isn't "free software" as the Ubuntu people can't look at the source code. That driver's downloading now. And it installed itself too, that was easy enough. But I need to restart. Ok, restart.

That was a very quick restart, although I guess all operating systems are quick when they're first installed.

Right, so what're these software updates it's telling me about? Apparently there are 196 updates to apply, a total of 245.4MB. Might kick that off tomorrow. Oh, neat little window animation there when I closed it.

So, right: looks like that's installed then. Easy enough, if a bit nail biting at times. Only took about half an hour as well. Although I'm not sure about that brown swirl of a desktop background, have to get rid of that. I'll have a play and see how I get on.

Vista vs Linux

Neil's commented on seeing a Vista Vs Linux video on YouTube. Check it out:

Direct Link Here

Back to XNA - Programming Considerations

I've got 2 weeks until I start contracting for E.ON so thought I'd get back to playing with XNA. Well, actually that's a lie. I started out this morning on Call of Duty 4, but kept dying on All Ghillied Up so then I decided to start playing with XNA.

I've signed up for the XNA Creators Club and tried launching the Space Wars starter kit project on my Xbox. Took a while to deploy all the content, but then again I am going over a wireless connection. I was impressed though, and started to think about what kind of game I could try to make.

Whatever I choose, I thought, I need to understand how the GamePad class works. So I started with a simple game that just output the state of the pad as a string on the screen. And it's here that I found my first problem. The screen on a television is very different to the monitor I'd been using for the Windows games I've made previously.

Firstly, the TV's refresh rate is lower than my monitor's, so you can see slight flickering when you have a bright screen. I'm pretty sure that will give me a headache if I look at it too much, so my Xbox games will need a darker background than on the PC.

But probably more important is that TV screens can trim off part of the display, so you don't get the full screen to play with. Xbox 360 Programming Considerations explains this, and other things to be aware of when making games.
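As a rough sketch of the arithmetic involved (the 10% inset per edge is the commonly quoted rule of thumb, not a figure from that page):

```csharp
// TVs can overscan roughly 10% on each edge, so only the inner 80% of the
// back buffer is guaranteed visible. For a 1280x720 buffer:
int width = 1280, height = 720;
int safeX = width / 10;            // 128
int safeY = height / 10;           // 72
int safeWidth = width * 8 / 10;    // 1024
int safeHeight = height * 8 / 10;  // 576

// Anchoring HUD elements at (safeX, safeY) instead of (0, 0) keeps them
// clear of the region a TV might trim off.
Console.WriteLine("Title safe: {0},{1} {2}x{3}", safeX, safeY, safeWidth, safeHeight);
```

In a real game you'd compute this once from the viewport dimensions and position all text and HUD sprites relative to it.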

So I'll give it some thought, and figure out what type of game I want to make.

Thursday, March 06, 2008

Ubuntu - First Impressions

I downloaded the latest ISO image of Ubuntu a few days ago, and tonight I finally got around to burning it to a CD so I could actually use it.

For those of you that don't know, Ubuntu is a Linux distribution focusing on usability. I guess the aim is to tempt people across from their Windows PCs by giving them something that's as simple to use but free and more secure.

Anyway, my limited Linux exposure was about 6 or 7 years ago with Red Hat and SUSE, and although they were relatively easy to get to grips with I didn't think they were ready for the masses.

Fast forward a few years to Ubuntu, and you get an OS that you don't even need to install if you only want to try it out. With LiveCD you just run the entire thing from CD so you can try before you commit to installing it. Old news to some people I know, but the whole "try before you buy" thing is certainly impressive when you're talking about something as complex as an operating system.

So that's what I've been doing tonight, playing with the LiveCD version to see what it's like. I'm impressed enough to want to install it, put it that way. I've got a second hard drive and intend to dual boot Ubuntu and XP; the documentation looks pretty straightforward, but I'll write more when the time comes.

Wednesday, March 05, 2008

New Life, New Blog

Tomorrow marks my last day as an employee of Esendex, and as my previous blog was set up solely to demonstrate the work I was doing there, I thought I should come back to this personal one and reinstate it.

So, as I continue with my life and career I'll be adding new posts here along the way. I shouldn't think I'll be maintaining my old one, although I'll leave it there for reference as I gather there are links to some posts that people have found useful.