Monday, June 28, 2010

Soldering Defect Database: How many ways can a solder joint fail? More than you might think.

The National Physical Laboratory (NPL), in the UK, has opened its Defect Database to the public. Of specific interest to us is the Soldering Defects Database.

When you go to the Soldering Defects Database, you are asked for an email address (it is not clear why). Once that is entered, a set of pull-down menus appears, allowing you to narrow the field of defects to search. If you want to see all of the defects, simply hit "Submit Query" without selecting anything; this brings up a long page of pictures of the various types of defects.

NPL is also accepting submissions of solder defects not already covered.

An entry in the IPC Blog Introducing Defect of the Month from NPL and IPC: Toe Fillet Defects, introduces the new Defect of the Month video series; Gimpel Software's Bug of the Month comes to mind.

Steve Ciarcia, of Circuit Cellar Magazine, has been known to say that his favorite programming language was solder. Consider these defects to be the 'compiler errors' of the hardware realm.

Saturday, June 26, 2010

Safe and Secure Systems, Software Symposium - S5; Software survivability: where safety and security converge

The 2010 Safe & Secure Systems & Software Symposium (S5) wrapped up this week (June 15th). It was sponsored by the Air Force Research Laboratory Control Systems Development and Applications Branch (AFRL/RBCC), located at Wright-Patterson Air Force Base, purported home of Hangar 18.

The AFRL Control Systems Development and Applications Branch performs basic research, exploratory development, and advanced development of flight control system technologies for highly reliable, fault-tolerant vehicle stabilization and flight path control. It develops concepts, components, criteria, analytical methods, and design tools for flight vehicle applications. It emphasizes fault-tolerant control system architectures; control automation; flight-critical hardware and software such as actuators, sensors, and computational elements; and on-board and networked vehicle management systems. It integrates flight control with airframe, avionics, propulsion, utilities, mission, and vehicle flight path management, and it conducts the full spectrum of development and application activities, ranging from initial concept definition through system mechanization, laboratory evaluation, and flight validation.

The AFRL/S5 event brought together industry, academia, and government to collaborate on a common goal: improving the airworthiness and assurance certification process for future aerospace flight control systems, with both incremental and revolutionary innovations in safety and security verification and validation (V&V) techniques, while keeping cost and risk at acceptable levels.

The executive summary list for the S5 event includes:

  • Improving V&V for flight critical/safety and mission/information security
  • Designing for Airworthiness Certification
  • Software for Complex Systems
  • Fundamentals for the Future

A good paper to start with, to get the flavor of the event and to see the relevance to Embedded Systems, is Software Survivability: Where Safety and Security Converge by Karen Mercedes Goertzel of Booz Allen Hamilton.

The S5 agenda contains links to all of the presented papers (the paper titles are links at that site, though that is not obvious in all browsers).

The Goertzel paper puts particular emphasis on something few of us give much consideration, and that we all need to change: Threats and Attacks on our systems. Threats and Attacks "require human intention and intelligence in planning and execution", in contrast to the normal Hazards that we do consider; the stuff of life that just happens, like lightning strikes, failed components, and lead-free tin whiskers.

Today's Adversaries are:

  • Knowledgeable: They know more about our software than we do, including its vulnerabilities.
  • Skilled and sophisticated: Not just "script kiddies". Attackers know how to exploit vulnerabilities, and how to augment direct attacks with social engineering and surreptitious malware (worms, Trojans, bots, spyware).
  • Quick: "Zero day" is the rule, with new attacks appearing before vulnerabilities are discovered by developers, let alone patched.
  • Motivated and well-resourced: Not just recreational hackers, but organized criminals, nation-state Info Warriors, Cyber Terrorists.

Our adversaries are not motivated by the Next Quarter's Bottom Line, unlike our short-sighted corporate leaders. They are motivated by greed (hmm... that makes it sound like our corporate leaders are our adversaries, doesn't it?) or the desire to do us harm. The Bad Guys usually have better resources, a single task to accomplish, and deadlines that are not driven by trade show schedules, unlike most Embedded System development groups. The Bad Guys take advantage of the mantra "There is never time to do it right now, but there will always be time to do it over someday in the future" that far too many organizations use as their design standard.

Goertzel mentions (see Naval Sea Systems Command (NAVSEA)/Naval Ordnance Safety and Support Activity (NOSSA)) one of my pet peeves, "unnecessary functionality", as one of the "Hazards and risks that arise in software". The Creeping Feature Creature can be a powerful taskmaster that leads to increased time to market and development costs that impact the bottom line, all for features that will never be used by a customer but seem "really cool" to management and developers.

Goertzel does give some recommendations on how to improve Software Safety and Security. Alas, none of them seems practical in the corporate world, due to time, budget, and size constraints.

Spend some time reading all of the other papers, to see where Safety Critical System development is headed.

Sunday, June 20, 2010

Safety and Security Considerations for Component-Based Engineering of Software-Intensive Systems; Engineering software for survivability intro.

In the document known as the "AS is State Report" [sic] from the Navy Software Process Improvement Initiative (SPII), the Assistant Secretary of the Navy for Research stated, in 2007, that all systems are to be considered Software Intensive unless a strong case can be made to the contrary. The Navy has been working with other branches of Government to develop plans related to Software Safety.

The Navy and an obscure branch of the Department of Homeland Security known as Build Security In, a project of the Strategic Initiatives Branch of the National Cyber Security Division (NCSD), have released a new draft [June 11, 2010] of Safety and Security Considerations for Component-Based Engineering of Software-Intensive Systems.

The Naval Sea Systems Command (NAVSEA) Composition draft is based on the earlier Department of Defense, Joint Software Systems Safety Engineering Handbook, Draft Version 0.95, 2009.

Specifically, the paper discusses:

  • The types of anomalous, unsafe, and non-secure behaviors that can emerge when components interact in component-based systems;
  • Analysis and assessment techniques that can be used to predict where and how such anomalous behaviors are likely to occur;
  • Architectural engineering countermeasures that can be used by the system's developer to either prevent such behaviors or to contain and minimize their impact, thereby mitigating the risk they pose to the safe, secure operation of the system; and
  • Architectural engineering techniques and tools that can be used to mitigate many emergent risks and hazards.

The referenced documents on system and software development alone make the paper worth a look.

"Because properties such as safety and security are not intrinsic to individual components in isolation - these properties emerge from the interactions between components or the interactions between a component and its environment or a human user - individual component testing and analysis (e.g., through static and dynamic analysis, fault injection, fuzzing, etc.) can only provide incomplete and indirect evidence of how the component might behave when interoperating with other components within a component-based system. Emergent properties can only be demonstrated through testing that involves component interactions, e.g., pair-wise component testing and testing of whole component assemblies."

I'm not sure I agree with the paper's premise that buffer overflows should be mitigated by 'sandboxing' areas of code that could overflow; that might end up being a really big 'sandbox' in some cases. Why not design the code so that buffer overflows cannot happen in the first place?
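Designing out the overflow is usually straightforward. A minimal C sketch of a bounds-checked copy (the function name and buffer size are my own illustration, not anything from the paper):

```c
#include <string.h>

#define NAME_MAX 16  /* buffer capacity fixed at compile time */

/* Copy src into dst[NAME_MAX], always NUL-terminating and never
   writing past the end of the buffer.
   Returns 0 on success, -1 if the input had to be truncated. */
static int copy_name(char dst[NAME_MAX], const char *src)
{
    size_t n = strlen(src);
    if (n >= NAME_MAX) {
        memcpy(dst, src, NAME_MAX - 1);
        dst[NAME_MAX - 1] = '\0';
        return -1;               /* overlong input: truncated, never overflowed */
    }
    memcpy(dst, src, n + 1);     /* n + 1 copies the terminating NUL too */
    return 0;
}
```

Callers still have to check the return value, but the worst case is now a truncated string and an error code, never a scribble past the end of the buffer.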

Other sections I find much easier to stomach, such as:

  • Hazards and risks that arise from composition.
  • Parameter-passing issues.
  • Timing and sequencing issues. Right data at the wrong time; firing your 50mm gun before aiming it is not good.
  • Resource conflicts. Race Conditions and Deadlocks.
  • Unanticipated execution of unused/dormant code. One of my pet peeves is seeing 'dead code' in live products.
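The "resource conflicts" item above deserves a concrete picture. The classic race is a read-modify-write on a counter shared between two contexts; a minimal C sketch of one mitigation, using C11 atomics (on a small target without atomics, disabling interrupts around the update does the same job; all names here are my own illustration, not from the draft):

```c
#include <stdatomic.h>

/* A tick counter shared between an ISR-like context and the main loop.
   An unprotected tick_count++ is a read-modify-write that two contexts
   can interleave, silently losing counts; the atomic fetch-add makes
   the update indivisible. */
static atomic_uint tick_count;

static void tick_increment(void)
{
    atomic_fetch_add(&tick_count, 1);   /* indivisible read-modify-write */
}

static unsigned tick_read(void)
{
    return atomic_load(&tick_count);
}
```

Deadlock, the other conflict named above, creeps in when code takes two such locks or resources in inconsistent order; a fixed global acquisition order is the usual cure.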

Section five gives some guidance on FMEA, FMECA, and Hazard Analysis for Component-Based Software:

"As noted in the Joint Software Systems Safety Engineering Handbook, because software has no physical failure modes, Failure Modes and Effects Analysis (FMEA), Failure Mode, Effects, and Criticality Analysis (FMECA), and related analysis can be difficult to apply to software intensive systems. This said, software does have functions that can be implemented incorrectly, operate erroneously, or fail to operate at all for various reasons. For component-based software, FMEA/FMECA that strives to identify the causes and potential severity (or criticality, in FMECA parlance) of failures of software functions needs to consider not just errors within individual software components, but errors that may arise from "mismatches", such as expected-but-not-received, unexpected, or incorrectly-formatted input from or output to other components."

The Draft goes on to explain ways of mitigating risks.

Unlike most documents out of the Government and Military Academia, this one at least mentions Embedded Software:

"Embedded Software: Software physically incorporated into a physical system, device, or piece of equipment whose function is not purely data processing, or external but integral to that system/device/equipment. The functionality provided by embedded software is usually calculation in support of sensing, monitoring, or control of some physical element of the equipment, device, or system, e.g., mathematical calculations of distances used in calibrating the targeting mechanisms of a weapon system; interpretation and comparison of heat sensor readings in a nuclear-powered submarine engine against a set of safe temperature thresholds."

Alas they still do not grasp the constrained resources of most Embedded Systems.

Appendix C, "Engineering software for survivability", gives a good, short synopsis of what should be considered when designing any Embedded System that needs to keep running, no matter what.

Saturday, June 19, 2010

I'm Scared

This week I spent a couple of days at regional seminars. At the Texas Instruments Technology Days 2010 in Cleveland, the thing I found most interesting was TI's new Analog Mirror: a single, large pixel of their already popular DLP technology. There must be something cool we can do with this mirror. Dynamic signs, maybe?

The other seminar this week was sponsored by Atmel, to drum up design wins for their AVR32 family of parts, here in the Pittsburgh market region.

Most of the seminar was spent showing how to use AVR32 Studio, which is based on Eclipse. Alas, I probably won't use these parts, because the seven-year-old machine supplied by the IT department would never handle such a large application. On a fresh reboot the machine already has 230+ MB of its 512 MB of memory consumed by the corporate bloatware mandated by IT (it makes their jobs easier).

What is relevant to us here today is a comment that the instructor made, paraphrased:

We had to switch to Eclipse because all of the people coming to the Embedded World from the Microsoft World did not know how to write their programs without such a tool.

I find that scary. We'll end up with large, expensive, and potentially unsafe products, because Management assumes any programmer who can write a business application or web app can design a safe embedded system if they only have the right tool.

Something else I find scary is the article Think it - Draw it - Build it by Mark Saunders, at Embedded.com.

Mark introduces us to the new Cypress Semiconductor PSoC Creator embedded design tool for Cypress's new PSoC 3 and PSoC 5 programmable system-on-chip architectures. The PSoC 5 parts are something I want to investigate for use in a board test system I'm designing; you never know what a board under test may need, so a configurable system is the way to go.

However I find the marketing of this product family to be scary:

...You will not need to know the CPU architecture we're using, or how the analog comparator or digital timer components are implemented...

...[PSoC Creator] abstracts away the hardware so you do not need to be an expert on the device you are using or the inner workings of peripherals you program it with...

For the comparator: is any overdrive required, and if so, how much? Is there any hysteresis to prevent oscillation? What happens when the counter rolls over; does it do The Right Thing? Hopefully some quality time spent with a quality data sheet (sadly, far too many data sheets lack any quality) would answer these questions. Would the people coming to Eclipse from the Microsoft world even know to ask such questions?
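The counter-rollover question at least has a well-known defensive answer in C: measure elapsed ticks with unsigned subtraction, which is defined modulo 2^32, rather than comparing against a precomputed deadline. A sketch (the tick source and names are illustrative):

```c
#include <stdint.h>

/* Wraparound-safe timeout check for a free-running 32-bit tick counter.
   (uint32_t)(now - start) is the true elapsed count even after the
   counter rolls over from 0xFFFFFFFF to 0, because unsigned subtraction
   wraps modulo 2^32. The naive test (now >= start + timeout) can fire
   immediately when start + timeout wraps past zero. */
static int timeout_expired(uint32_t now, uint32_t start, uint32_t timeout)
{
    return (uint32_t)(now - start) >= timeout;
}
```

This assumes the check is polled at least once per full counter period, so the elapsed count never exceeds what 32 bits can represent.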

Now are you starting to see why I'm scared? I fear that we are heading down the path of Think it - Draw it - Build it - Ship it - So we get paid for it - Doesn't matter if it works right.

How scared are you about the quality of our future Embedded Systems?

"I Code to Spec"

JustFred said something in the thread about how to deal with handling names that I found all too depressing, because it is true far too often:

I code to spec. The product and marketing departments write the spec (what little there is); the QA department amends the spec with overly specific test cases. I suggest that the spec is incomplete and won't handle...but I'm told, just code it to spec. I recommend changes, but we don't have time for edge cases. I point out potential problems, but we're unlikely to get any of those. I warn of potential compatibility problems but we don't care. Are you just trying to be difficult? If there's a problem QA will catch it. The project is overdue already, and by the way here are some new requirements that need to make it in, and we can't change the release date because we already promised the stockholders. Why is your code so complicated, my twelve-year-old kid could write this.

It's not my fault. I code to spec.

-- http://slashdot.org/comments.pl?sid=1690138&cid=32609504.

What happens when developers are held liable for their code? Do we become Management's scapegoat for problems that they would not let us fix?

Proper handling of Proper Names

Patrick McKenzie recently wrote a blog entry, Falsehoods Programmers Believe About Names, based on an understandable rant by John Graham-Cumming about databases not handling proper names. John's proper last name contains a hyphen.

Patrick gives a list of forty assumptions that are wrong when handling the entry of proper names into a system. This article on Slashdot further elaborates and amends Patrick's list.

Alas, in my experience the problem of not handling non-alphanumeric characters is not limited to web forums. I've wanted to use "+5" and "-5" as net labels in CAD packages in the past. The leading signs caused the program to explode, because it did not know how to handle what it took to be an arithmetic operation in a net label. Which leads to the next problem: all inputs must be sanitized in some fashion to prevent crashing systems.

What do you do to trade off making inputs secure versus allowing what should be valid proper names and addresses? Unicode and the International Components for Unicode (ICU) can help, but they don't always fit into the memory space of small embedded systems. Is there a smaller solution?

Tuesday, June 8, 2010

FTC wants to save Newspapers by taxing our electronic designs. Watch live June 15th.

On June 15th the Federal Trade Commission is going to hold a live web cast entitled "How Will Journalism Survive the Internet Age?". Details on the event are listed here. Comments on this event can be left here.

By now you are wondering what this has to do with Embedded Systems or Software Safety. That is explained by this document: POTENTIAL POLICY RECOMMENDATIONS TO SUPPORT THE REINVENTION OF JOURNALISM.

It comes down to this: the FTC, in Government Double Speak, is proposing many onerous Taxes and Fees (Taxes are not popular, so raise the Fees, which don't have the Tax stigma yet) on the products that you and I design.

Just a couple of highlights, of the many, many, new taxes to be discussed at the meeting:

  • Tax on consumer electronics. A five percent tax on consumer electronics would generate approximately $4 billion annually.
  • Advertising taxes. They note that a considerable amount of our broadcast spectrum has been turned over to disseminating commercial advertisements, and a two percent sales tax on advertising would generate approximately $5 to $6 billion annually. In addition, they suggest that changing the tax write-off of all advertising as a business expense in a single year to a write-off over a five-year period would generate an additional $2 billion per year.

It goes downhill from there, with Them wanting to copyright "Facts" so that only Newspapers can use them...

Between approximately 1439, when Gutenberg invented the printing press, and 1710, when copyright was created, information was controlled by the printing guilds. It seems that the FTC wants to turn the clock back by 300 years:

"...amending the copyright laws to create a content license fee (perhaps $5.00 to $7.00) to be paid by every Internet Service Provider on each account it provides."

Read this commentary on the report, mostly on how public input is being ignored by Them, by Jeff Jarvis of the New York Post, who tells them "Get off my lawn!". Mr. Jarvis raises the question: who asked the FTC to 'help'?

About the FTC reads in part:

"When the FTC was created in 1914, its purpose was to prevent unfair methods of competition in commerce as part of the battle to 'bust the trusts.'"

So now the FTC views the Internet as the 'Trust' that needs busting. Is the FTC just killing off the trust it cannot control, to support the trust that it can?

I find it interesting that this event is being held at a 'Club' (at our expense, most likely), the National Press Club specifically. If you don't understand why Government meetings at clubs are bad for you and me, then you need to consider a speech given before the National Economists Club in Washington, D.C. on November 21, 2002 by Ben S. Bernanke:

"What has this got to do with monetary policy? Like gold, U.S. dollars have value only to the extent that they are strictly limited in supply. But the U.S. government has a technology, called a printing press (or, today, its electronic equivalent), that allows it to produce as many U.S. dollars as it wishes at essentially no cost. By increasing the number of U.S. dollars in circulation, or even by credibly threatening to do so, the U.S. government can also reduce the value of a dollar in terms of goods and services, which is equivalent to raising the prices in dollars of those goods and services. We conclude that, under a paper-money system, a determined government can always generate higher spending and hence positive inflation." - Ben S. Bernanke, current chairman of the Federal Reserve.

So we can see that as long as eight years ago it was already in the planning stages for Them to put the screws to you and me!

To wrap this all up, watch this video and read 1984 by George Orwell, if you have never read it before.

I'm Taxed Enough Already, how about you?