Saturday, January 9, 2010

The split between industry practice and academic research

Herb Sutter and Bjarne Stroustrup have some interesting comments on What Should We Teach New Software Developers? Why?.

I found this of particular interest:

Another CS professor: "I never code."

Another industrial manager: "We don't hire CS graduates; it's easier to teach a physicist to program than to teach a CS graduate physics."

I have personally had to teach two different freshly minted CS graduates how to code for embedded systems. They could write a compiler, but they did not know the meaning of the basic C bit-wise operators, or Boolean logic. To their personal credit, they both picked up the concepts very quickly. To the school system's credit I can give none. Personally, I'd rather hire a Ham Radio operator who has the knack for tinkering and a real desire to learn than someone who has been molded by the conventional academic system.

In Issue 7, January 2010 of PragPub there is an article Against SEMAT — Is this software development call for action necessary? asking why we need a new organization like Software Engineering Method and Theory (SEMAT), as author Jorge Aranda says "You say you want a revolution? We'd all love to see the plan."

I'd like to see that plan myself. Of particular interest to me is the SEMAT mission statement's goal of addressing the split between industry practice and academic research. This being the Software Safety blog, I spend a lot of time reading academic papers, which often speak of great advances in things like object-oriented languages for creating bug-free software.

The problem is that, coming from the other end of the spectrum, I spend my days designing embedded systems, and I can tell you that few to none of those academic papers are written with resource-constrained systems in mind. My typical part is an Atmel AVR with 8K of Flash and 1K of RAM, total.

Academia, with few exceptions, assumes that we can just use more powerful parts, while the reality from Management is that we can't. If we can save a penny by using an 8K/1K part instead of a 16K/2K part, then that is what is going to happen, unless you are making very few units. For example, from a past blog entry at my hardware site:

[T]hat takes four resistors per board, on a board that already did not have enough space. Also, at 50,000 units per year, with a design lifetime of five years, that is 1,000,000 resistors. After a while these resistors start to add up to real money, for what is a single event at manufacturing time.

The same logic applies to using larger memory or faster parts: they cause costs to rise.

How do we get safe software from Academia that deals with the real-world economics of real hardware?