Now we just have to use this file, add a couple of extra fields (Attend/Skip, Notes, etc.), and we’re set. And you know what? We could load the XML into a dataset and process it completely with ADO.NET.
Uh, kinda. First of all, to load the XML into a dataset we need to generate an XML Schema from the data file. This particular file yields a weird schema that doesn’t map intuitively onto a dataset. It’s doable, but not easily. Furthermore, loading that much XML into a dataset with the .NET Compact Framework is heavy. I mean, really heavy.
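For reference, the dataset route looks roughly like this. This is a sketch with a tiny inline stand-in for the real sessions file (the element names here are hypothetical, not the actual conference schema); with no schema supplied, `ReadXml` infers one from the document structure, and on the real file that inferred shape was anything but intuitive.

```csharp
using System;
using System.Data;
using System.IO;

class DataSetSketch
{
    static void Main()
    {
        // Tiny stand-in for the real sessions file (hypothetical shape).
        string xml =
            "<sessions>" +
            "<session><title>Intro to CF</title><speaker>Jane</speaker></session>" +
            "<session><title>XML on devices</title><speaker>Joe</speaker></session>" +
            "</sessions>";

        DataSet ds = new DataSet();
        // No schema supplied: ReadXml infers tables and columns
        // from the document structure.
        ds.ReadXml(new StringReader(xml), XmlReadMode.InferSchema);

        foreach (DataTable table in ds.Tables)
            Console.WriteLine("{0}: {1} rows", table.TableName, table.Rows.Count);
    }
}
```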
After a couple of experiments with a stopwatch, I decided to look for an alternative method.
Well, the data is in XML, .NET is the weapon of choice to manipulate XML and manage it as data, so why not load the file into an XmlDocument and let System.Xml work its magic?
After quite a bit of rewiring of the internal data representation, I was ready to test load times. Twice as fast as DataSet.ReadXml, but still a bit slow: on my PocketPC it takes 24 seconds to get from tapping the filename in File Explorer to being in a usable state on the first screen. Oh well, nearly all of that time is spent in XmlDocument.Load, so there’s not a lot I can do about it. The XML file with all the session data is 221 KB, after all.
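The XmlDocument approach itself is nothing exotic. Here’s a minimal sketch of the shape of it; the file name and timing via `Environment.TickCount` are illustrative (the sketch writes a tiny file so it’s runnable anywhere, whereas the real app loads the full 221 KB file):

```csharp
using System;
using System.IO;
using System.Xml;

class XmlDocSketch
{
    static void Main()
    {
        // Tiny stand-in for the real sessions file, written out so the
        // sketch is self-contained.
        File.WriteAllText("sessions.xml",
            "<sessions><session><title>Intro</title></session></sessions>");

        int start = Environment.TickCount;

        XmlDocument doc = new XmlDocument();
        doc.Load("sessions.xml"); // on the device, this call is where the 24 seconds go

        int elapsedMs = Environment.TickCount - start;
        Console.WriteLine("{0} sessions loaded in {1} ms",
            doc.DocumentElement.ChildNodes.Count, elapsedMs);
    }
}
```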
Technically I could prune the speakers’ biographies from it and cut both size and load time, but the idea was to use the file unmodified, so updated versions can be deployed with a simple copy.
But the real problem with the XmlDocument approach wasn’t load time or memory usage. To reduce the footprint of the .NET Compact Framework, the designers at Microsoft had to leave some features out, and one that ended up on the chopping block is XPath, along with everything related to it.
There’s not much point in keeping the in-memory data structure as an XmlDocument if I have to write all the methods to access it myself, eh?
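To make that concrete: on the full framework a query like “find the sessions by a given speaker” is a one-line `SelectNodes("//session[speaker='Jane']")` call. Without XPath, every such query becomes a hand-written walk over `ChildNodes`, roughly like this sketch (element names again hypothetical):

```csharp
using System;
using System.Xml;

class NoXPathSketch
{
    static void Main()
    {
        XmlDocument doc = new XmlDocument();
        doc.LoadXml(
            "<sessions>" +
            "<session><title>Intro to CF</title><speaker>Jane</speaker></session>" +
            "<session><title>XML on devices</title><speaker>Joe</speaker></session>" +
            "</sessions>");

        // What SelectNodes("//session[speaker='Jane']") would do,
        // spelled out by hand since XPath isn't available.
        foreach (XmlNode session in doc.DocumentElement.ChildNodes)
        {
            XmlElement speaker = session["speaker"];
            if (speaker != null && speaker.InnerText == "Jane")
                Console.WriteLine(session["title"].InnerText); // prints "Intro to CF"
        }
    }
}
```

One of these loops per query adds up fast, which is exactly the objection above.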