Automated Software Engineering Day 2

Well, day 2 is done and I have to say it was a bit of a disappointment compared to the first day.  Unfortunately, I believe that was in part due to my choice of sessions.  The best paper was in the session I did not attend, and after talking with the authors it sounds rather interesting.  There was one shining beacon: the day's keynote.  The component session was not bad, but not great, while the experience reports were pretty much a disaster and did not really provide anything interesting.  Oh well, let's jump in and go over the highlights.

The keynote was one of the best talks I have been to in a long time.  It was about using swarm intelligence to attack hard computational problems: basically, working around the slowdown of Moore's law through parallelism.  The talk dealt with model checking, a computationally hard search problem.  It was concise and provided enough information to motivate the discussion, but not so much as to overwhelm the audience.

The speaker, Gerard J. Holzmann, talked about how increases in processor speed have slowed and how strongly that affects model checking search algorithms.  Basically, his point was that there is just not enough time to check the entire search space.  The solution was incredibly simple: start a whole swarm of processes that search from different areas and follow different criteria.  There is no communication between them and a lot of overlap occurs, but it is still far better than using a single box, and since the number of available processors keeps growing it is very cost effective.  I know I am not doing the talk justice and am really simplifying the problem, but that's the best summary I've got.
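I obviously cannot reproduce the actual Spin-based swarm setup from the talk, but here is a little toy sketch in Python of the idea as I understood it: a bunch of independent workers, each with its own random search order and a fixed step budget, no communication between them, and you simply take whichever worker stumbles onto the bad state first.  The state space and the "bad state" below are made up purely for illustration.

```python
import random
from multiprocessing import Pool

N_BITS = 18                        # toy state space: all 18-bit integers
BAD_STATE = (1 << N_BITS) - 1      # the "assertion violation" we are hunting for

def explore(seed, max_steps=100_000):
    """One swarm member: a bounded depth-first search that orders successors
    using its own random seed.  Returns (seed, steps) if it reaches BAD_STATE
    within its budget, or (seed, None) if it runs out of steps first."""
    rng = random.Random(seed)
    stack, seen, steps = [0], {0}, 0
    while stack and steps < max_steps:
        state = stack.pop()
        steps += 1
        if state == BAD_STATE:
            return seed, steps
        # Successors flip a single bit; each worker shuffles them differently,
        # so the swarm as a whole wanders into different corners of the space.
        bits = list(range(N_BITS))
        rng.shuffle(bits)
        for b in bits:
            nxt = state ^ (1 << b)
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seed, None

if __name__ == "__main__":
    # No communication between workers: each one only gets a different seed.
    with Pool() as pool:
        results = pool.map(explore, range(16))
    hits = [(seed, steps) for seed, steps in results if steps is not None]
    if hits:
        seed, steps = min(hits, key=lambda h: h[1])
        print(f"worker {seed} hit the bad state after {steps} states; "
              f"{len(hits)}/{len(results)} workers found it independently")
    else:
        print("no worker reached the bad state within its step budget")
```

The nice part is how embarrassingly parallel this is: adding more workers costs nothing in coordination, which is exactly why the growing number of cheap cores makes the approach so cost effective.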

The component-based systems session was not bad, but not great either.  The first paper proposed yet another framework for dynamic service discovery; the ideas were ones I've seen before, and I don't think it added much, if anything, new to this problem.

The next modeling talk I thought was interesting, but not for the reason the speaker described.  He presented a meta-meta-model to make the creation of meta-modeling languages easier, and thus make domain-specific languages easier to create.  I think generalizing that idea in modeling would be pretty cool.  Modeling has a problem in that there isn't much reuse happening.  But if you build the whole thing in layers, it may work better: instead of talking about things like a meta-meta-model, build a whole layered system, and the reuse will happen at the individual layers.  I think that's where it is heading, but I haven't seen anyone actually say so.
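To make that layering idea a bit more concrete (this is purely my own toy sketch, not anything the speaker presented): if every element just points at the element one layer up that defines it, the "meta-meta" special cases disappear and each layer becomes something you can reuse on its own.

```python
# My own toy sketch of uniform layering, nothing from the paper itself.
class Element:
    def __init__(self, name, meta=None):
        self.name = name
        self.meta = meta   # the element one layer up that this one instantiates

    def depth(self):
        """Number of layers above this element (0 = top of the stack)."""
        return 0 if self.meta is None else self.meta.depth() + 1

# A small stack of layers, roughly in the MOF spirit:
clazz    = Element("Class")                              # topmost layer
machines = Element("StateMachine", meta=clazz)           # a modeling language
coffee   = Element("CoffeeMachine", meta=machines)       # a concrete model
running  = Element("the one on my desk", meta=coffee)    # an instance of that model

for e in (clazz, machines, coffee, running):
    print(f"layer {e.depth()}: {e.name}")
```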

The last paper talked about programming environments for ad-hoc collaboration.  I think it was an interesting idea, way better than the collaborative editor shown at TechCrunch50, but the interface seemed too busy to be useful.  I don't think I would ever want to turn it on.  I hope it improves, but it's not great so far.

That was about it for day 2.  Some interesting stuff, but overall not as impressive as day 1, though the keynote was rather good.
