Wednesday, March 30, 2011

The Digi-Key Catalog is Dead


Just in case you missed this press release on the Digi-Key site:

THIEF RIVER FALLS, Minnesota - March 25, 2011 — In an unprecedented industry move, Digi-Key Corporation, a global electronics component distributor, recently announced that it will immediately cease all print versions of its product catalog and TechZone™ Magazine, offering this content exclusively online. This all-digital decision, driven by the company's efforts to be environmentally vigilant, marks a major milestone in Digi-Key's transition into a totally integrated Internet-based distributor, using state-of-the-art technology to support customers and streamline sales for suppliers worldwide....

A colleague of mine commented today [March 30, 2011] "Yeah, they did it for the environment.  They did it to save money!"

I find flipping through the catalog to be more productive than trying to use their web site when I'm looking for something, like a connector, but don't know exactly what I need until I see it. How about you?  I've yet to find modern web-based catalogs and magazines to be as fast as those in the real physical world.  With the chemicals-on-dead-trees version the pages can be turned in well under a second.  I've yet to see that happen with the digital ones.  Has our software become so bloated that the real world is faster than cyberspace?

Digi-Key has not been doing the industry any favors by raising prices 50% every few days on some commodity parts due to the Japan earthquake.  Alas, they are not alone in such actions...





Sunday, March 27, 2011

LightSquared and GPS controversy position statement from the US Government

The US Government-run Space-Based Positioning Navigation & Timing National Executive Committee has issued an official statement about potential interference to GPS receivers from the LightSquared, LLC communications network.

There is quite a purported back story of political intrigue swirling around the net, involving past heads of the FCC, the White House, and the usual problems of politics such as corruption and greed: GPS being rendered useless by bureaucrats paying back favors versus technically competent people making rational decisions. For fear of lawyers I'll let you do that research for yourself; it is not hard to find.

Sticking to the technical matters, the company LightSquared is planning on setting up a new countrywide data network, which we sorely need to get AT&T and Verizon out of our lives [Verizon left us without Internet and phone service for a week. My wife tracked down the phone number for the Head Cheese in Washington, DC, who said the people giving us repair hassles "will not like hearing from us!"; in less than 24 hours a mobile "Tornado Trailer" was supplying us service while the equipment for the area was repaired.], but I digress.

LightSquared's plan is to set up 4G-LTE transmitters of thousands of watts in spectrum immediately adjacent to the GPS spectrum.

Those knowledgeable in GPS receiver sensitivities fear that the low-cost GPS receivers most of us have will be blinded by the nearby high-power transmitters. It comes down to a lot of power from transmitters close to the receivers on the ground versus GPS signals arriving from thousands of miles away in space. Which one do you think will win?

Past items about GPS:

Test Driven Development in Embedded C class synopsis

This week I spent three days with James Grenning, and ten other gentlemen, being educated in Test Driven Development for Embedded C at the LeanDog boat in Cleveland, Ohio.

For some background, Jack Ganssle did a two-part interview with Mr. Grenning last summer [2010] on the subject of TDD in embedded development:
A majority of the course is based on James' soon-to-be-released book [May 2011], Test Driven Development for Embedded C, and on a lot of insights drawn from his background in agile development methods and Embedded Systems; James was one of the signers of the Agile Manifesto.



The premise of TDD is that you write the tests for the code that needs to be created before the first line of code is ever written. How can you possibly know if your code is bug-free if you cannot figure out how to write a test for it? As each small test case is written, you watch it fail, then write the code that is being tested, then run the test again to see that it passes. In the method of designing from the top down and building from the bottom up, our products are now built on a solid foundation of small tested functions, rather than on untested code full of unknown bugs, sinking into the Big Ball of Mud.
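
To make that cycle concrete, here is a minimal sketch of my own, not an example from the class or the book, written for CppUTest, the test harness we used in class (described below); the Clamp module and file names are hypothetical. The test file is written first and watched to fail (it will not even link yet), then just enough production code is written to make it pass:

    /* Clamp.h -- hypothetical interface of the module under test */
    int Clamp(int value, int min, int max);

    /* ClampTest.cpp -- written FIRST; build it and watch it fail */
    #include "CppUTest/TestHarness.h"

    extern "C" {
    #include "Clamp.h"
    }

    TEST_GROUP(Clamp)
    {
    };

    TEST(Clamp, ValueInsideRangeIsUnchanged)
    {
        LONGS_EQUAL(50, Clamp(50, 0, 100));
    }

    TEST(Clamp, ValueAboveRangeIsLimitedToMaximum)
    {
        LONGS_EQUAL(100, Clamp(250, 0, 100));
    }

    /* AllTests.cpp -- the test runner main */
    #include "CppUTest/CommandLineTestRunner.h"

    int main(int argc, char** argv)
    {
        return CommandLineTestRunner::RunAllTests(argc, argv);
    }

    /* Clamp.c -- the production code, written only AFTER watching the tests fail */
    #include "Clamp.h"

    int Clamp(int value, int min, int max)
    {
        if (value < min) return min;
        if (value > max) return max;
        return value;
    }

Each new behavior gets the same treatment: one small failing test, just enough code to make it pass, then clean up and repeat.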

More thought is put into the upfront requirements (the what of the product, and how it gets validated) and specifications (the how of building the product, and how it gets verified), rather than watching the schedule slip month by month while trying to get all of the bugs out of the code by heroic effort at the end of the development process.

In the words of Confucius: "I hear and I forget. I see and I remember. I do and I understand." A large part of the class was hands-on examples of writing tests for supplied examples, then seeing how our versions compared to the one written by James. Some of the examples had deliberate bugs and build traps that those new to TDD would fall into, such as a module being pulled from a library causing a name clash when a header file gets edited. There is the real production code, the test code, and sometimes fake or 'mock' code. The 'mock' code takes the place of some item that does not yet exist or would be too time-consuming to test. If things were not set up correctly the fake/mock code would collide with the real production code. In this one case I still think I prefer using Makefiles and different build directories rather than James' linker library method. Neither is right or wrong; it is a matter of taste and style.
All of the tests were run using the test harness CppUTest, which can test plain old C code as well as C++ code.
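
To illustrate the fake idea with a hedged sketch of my own (the flash driver names here are hypothetical, not from the class materials): the code under test calls a flash driver that does not exist yet, or that only runs on the target, so the test build links a fake that simply records what was asked of it.

    /* FlashDriver.h -- interface shared by the real driver and the fake */
    #ifndef FLASH_DRIVER_H
    #define FLASH_DRIVER_H
    void Flash_Write(unsigned address, unsigned data);
    #endif

    /* FakeFlashDriver.c -- linked into the TEST build in place of the real,
       target-only driver; it records the last call so a test can check it */
    #include "FlashDriver.h"

    static unsigned lastAddress;
    static unsigned lastData;

    void Flash_Write(unsigned address, unsigned data)
    {
        lastAddress = address;
        lastData    = data;
    }

    unsigned FakeFlash_GetLastAddress(void) { return lastAddress; }
    unsigned FakeFlash_GetLastData(void)    { return lastData; }

Whether the fake or the real driver ends up in the link is exactly the Makefile-and-build-directories versus linker-library question above; either works, as long as only one definition of Flash_Write() reaches the linker.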

TDD is not just for brand-new projects; it is equally suited to cleaning up the tangled mess that some legacy code has become after years of changes by different people. Part of the class was on 'refactoring', the difference between that and redesigning the code, and how to apply the test cases. Pick out a single area of code, with the end target being a single well-tested function. Once tests have been established for the existing code, the code is incrementally moved, in small steps, to expose visibility points for more tests, until it can be replaced with new, clean, tested code. James and the book are far more eloquent at describing the process than I have been here. James' blog and papers, several covering Embedded Systems, are a great resource to take advantage of, for example: Legacy Code Change - a Boy Scout Adds Tests.

One of the other tools mentioned, but not actually used in class, is the FitNesse acceptance testing framework.

Alas, TDD is not a panacea for all of our collective code problems. It may not cover integration issues such as multi-threading race conditions [use the Erlang principles of modify nothing and make everything a message, and threading issues vanish, even on multicore parts]. It also might not find issues that depend on the target hardware's integer size when the host and target differ, unless the tests can also be run on the target, which is strongly recommended when possible.
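
As an illustration of the integer-size point, here is a hypothetical sketch of my own: a test like this passes on a typical 32-bit development host but fails on a target where int is only 16 bits wide, which is exactly why running the tests on the target as well is so valuable.

    #include "CppUTest/TestHarness.h"

    TEST_GROUP(IntegerSize)
    {
    };

    TEST(IntegerSize, MillisecondsPerMinuteFitsInAnInt)
    {
        int msPerMinute = 60 * 1000;      /* overflows a 16-bit int on the target */
        LONGS_EQUAL(60000, msPerMinute);  /* passes on the host, fails on the target */
    }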

At lunch Jon Stahl, the owner of LeanDog and our gracious host for the three-day class, gave us lunchtime talks, such as the one on Agile & Lean From The Top Down: Executives Practicing Agile and Seeing Constraints, Kanban Explained. Take a look at the slides to see the LeanDog Boat, the floating office, and the Lean Dogs themselves, Otis and Iggy. Talk to Jon about improving your bottom line in your business area and/or your software development area.

Also, it is always good to get out, as sometimes you learn things at lunch that you would never hear about otherwise, such as The Tau Manifesto on how Pi is (still) wrong [YouTube Video]...

If you have the opportunity to take James' TDD-in-C class when he is in your area, make the time for it, and if you are in Cleveland, stop by to say hello to Otis and Iggy and sign up for some training.

Saturday, March 26, 2011

Make Makefile Tip #4: GNU Make Standard Library (GMSL)

The next tip in our ongoing series of Make/Makefile tips is to use the GNU Make Standard Library (GMSL). GMSL supplies the features you have always wanted when writing your complex Makefiles but could never figure out how to implement; a short usage sketch follows the feature list below.

The GNU Make Standard Library (GMSL) is a collection of functions implemented using native GNU Make functionality that provide list and string manipulation, integer arithmetic, associative arrays, stacks, and debugging facilities. The GMSL is released under the BSD License.

  • Associative Arrays
  • Integer Arithmetic Functions
  • List Manipulation Functions
  • Logical Operator: AND
  • Logical Operator: NAND
  • Logical Operator: NOR
  • Logical Operator: NOT
  • Logical Operator: OR
  • Logical Operator: XOR
  • Miscellaneous and Debugging Facilities
  • Named Stacks
  • Set Manipulation Functions
  • String Manipulation Functions
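
Here is a brief sketch, based on my reading of the GMSL documentation (check the GMSL manual for the authoritative details), of what using the library looks like. After downloading the two files gmsl and __gmsl into your project, include gmsl and call its functions with $(call ...); the cpu_for array and board name below are made up purely for illustration:

    # Makefile sketch using the GNU Make Standard Library
    include gmsl                                # also pulls in __gmsl

    BUILD_NUMBER := $(call plus,41,1)           # integer arithmetic: 42
    PROJECT_UC   := $(call uc,widget)           # string manipulation: WIDGET

    $(call set,cpu_for,boardA,cortex-m3)        # associative array: board -> CPU
    CPU          := $(call get,cpu_for,boardA)  # cortex-m3

    $(info Build $(BUILD_NUMBER) of $(PROJECT_UC) for CPU $(CPU))

No more fighting plain Make syntax to do arithmetic or to build lookup tables out of nested $(if ...) calls.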

Past Makefile Tips:

How do I get started in this field? With "Understanding Small Microcontrollers".

A recurring theme I'm sure we all see is someone with the Knack wanting to get started in the embedded field. My recommendation for the first thing to start with is the book, for the long-obsolete 68HC05, Understanding Small Microcontrollers by James M. Sibigtroth.

I'm sure many readers of this blog will find that odd. Just because it is old doesn't mean it is not still educational. The book covers fundamentals like computer numbers and other things that more experienced authors gloss over. It is also free, a price a budding seven-year-old can afford until someone can drive them to the library. Obviously there are lots of resources on the Internet, but to me the best way for a young person to learn is with books, to spur the imagination.

Saturday, March 19, 2011

Japanese Earthquake Update from International Atomic Energy Agency

I wanted to point out the site of the International Atomic Energy Agency as a follow-up to my post last week, Japan nuclear power plant at Fukushima-Daiichi to become a real China Syndrome?

IAEA has a page that they are updating regularly with information coming from Japan.

Also, Scientific American [TM] has posted a page, How Radiation Threatens Health. Let's hope the rest of us don't need to find out.

Maybe it is time to take a look at Thorium Fluoride Molten Salt Reactors that are passively safe, rather than safe by extraordinary measures?

The glue that holds the electronics industry together falls apart: the Bismaleimide Triazine (BT) resin shortage.

We are now one week out from Japan's worst earthquake, and the problems that affect the electronics supply chain are starting to appear.

Something that most of us have never heard of, Bismaleimide Triazine (BT) resin, is about to impact our electronic production lives. Mitsubishi Gas Chemical Company, Inc. (MGC) seems to be the major supplier of this material to the world. This was posted on their website on March 14th, 2011:

Recovery Working of MGC's Electronic Materials Subsidiary

This is a brief report on the most recent status of recovery operations at Electrotechno Co., Ltd. (Nishishirakawa-gun, Fukushima), the MGC electronic materials production subsidiary affected by the major earthquake that struck off the coast of Eastern Japan on March 11, 2011:

The earthquake has caused damage to part of the interior of the Electrotechno buildings and to some equipment; however, power and gas supplies have now been restored.

Since Monday, March 14, Electrotechno management and staff have been working with construction experts to conduct a close inspection of its buildings and equipment, while at the same time making every effort to restore operations. On the basis of information obtained during this inspection, MGC will announce its outlook for the restoration of Electrotechno on Friday, March 18. [None was posted today, Saturday March 19th.]

At present, MGC estimates that product supply from Electrotechno will be hindered for the immediate future, but is committed to making every effort to restore production as soon as possible. Further reports will be provided as soon as more information becomes available.

Where is BT used? It is the 'glue' that holds together the glass yarn fibers (which are also now in short supply due to damage at a different factory) that make up the FR4 PCB laminates that almost all of us build our products on.

BT is also used to hold the chip dies in place on the packaging substrate material. So even if we have the printed circuit board itself, we still might not be able to get the chips, from ICs to FETs, to put on it. Blank wafers will also be in short supply due to damage at yet other factories.

Alternate suppliers for BT exist, but unless that material has already been qualified it can lead to cracking of the body of the part when exposed to heat, as no two manufacturers' processes are exactly alike. It can take a long time to qualify a new material.

The cracking can lead to moisture ingress that can damage ICs when they go through the soldering process. A single drop of water, tiny relative to the size of the IC, can produce enough steam to fracture the die from the sudden pressure change when the water transitions from liquid to gas.

Moisture Sensitivity Level (MSL) is one of those obscure items found on data sheets, if it can be found at all, that most designers ignore, thinking it has no relevance to them. Sometimes the MSL ratings are not even on the data sheets, but in separate reliability documents that few look at. MSL is a number from one to six indicating how long an IC can be exposed to room air before it would be damaged by the soldering process due to moisture ingress. ICs don't come in those big nitrogen-filled silver bags just for the fun of it. They are in the nitrogen to keep the moisture out.

An IC with an MSL of one can be exposed to air indefinitely; an IC with an MSL of six cannot be exposed at all. It must be baked at a low to moderate temperature to drive the moisture out before it can be exposed to the high temperatures of the soldering process, then soldered while still warm.

We can only hope that those at the top have learned their lessons about relying on single suppliers in a single location. Maybe it is time to bring manufacturing back home? Does anyone know of any second-sourced modern micros anymore? Companies don't like second sourcing, as there is little profit to be had. Once again greed corrupts all. The hoarding ("Stockpiling for Q311") and gouging have already started; Digi-Key raised the prices on some capacitors this week by fifty percent. These problems are also only going to make the counterfeit parts problem worse.

In the end, let's not get so self-absorbed that we lose sight of those in Japan who have lost literally everything, and let's do what we can to help them.

Sunday, March 13, 2011

Senate Armed Services Committee Announces Investigation Into Counterfeit Electronic Parts in DoD Supply Chain

It seems that counterfeit electronic parts have gotten so bad that now even the politicians are getting involved. I've personally experienced this, having gotten some tantalum capacitors that were marked with higher voltages than their true working voltage. They'd last about six months in the field, then *explode*. With the problems in Japan right now, I expect there will be a massive increase in counterfeiting of parts that should be coming from the legitimate Japanese supply chain:

WASHINGTON – Following is a statement by Senators Carl Levin (D-Mich.) and John McCain (R-Ariz.), chairman and ranking member of the Senate Committee on Armed Services, regarding the committee’s investigation into counterfeit parts in the DoD supply chain:

"U.S. Senate Armed Services Committee U.S. Senate Armed Services Committee has initiated an investigation into counterfeit electronic parts in the Department of Defense's supply chain. Counterfeit electronic parts pose a risk to our national security, the reliability of our weapons systems and the safety of our military men and women. The proliferation of counterfeit goods also damages our economy and costs American jobs. The presence of counterfeit electronic parts in the Defense Department’s supply chain is a growing problem that government and industry share a common interest in solving. Over the course of our investigation, the Committee looks forward to the cooperation of the Department of Defense and the defense industry to help us determine the source and extent of this problem and identify possible remedies for it."

James Grenning to present "Test is not for finding bugs", Cleveland March 23, 2011, 5:30PM

James_Grenning_March23_2011_Event_Flyer

It is with great pleasure that Firmware Engineers of Northeast Ohio, Cleveland Agile Group (CleAg), and the IEEE Cleveland Computer Society welcome James Grenning. He will be speaking to us at our upcoming event on March 23rd, 2011.

James Grenning trains, coaches, and consults worldwide. His considerable experience brings depth in both technical and business aspects of software development. James is leading the way in introducing Agile practices to the embedded world. He invented Planning Poker and is one of the original authors of the Manifesto for Agile Software Development (February 2001).

Mr. Grenning will present "Test is not for finding bugs".

Test is something that has to get done sometime before shipping the product. Test can wait while we do the important work of specifying, designing and coding the system. Test helps find bugs. Test happens at the end.

Wait! Don’t quote me on that!

Test is not that unpleasant activity at the end of the *development phase*; it is an integral and critical part of everyday work. It does not add drudgery and overhead, it adds rewarding feedback and makes it possible to put more value into the software instead of wasting time chasing bugs. Test is not about finding bugs anymore. Test is specification; test is defect prevention; test drives good designs. Tests must be largely automated. You may think that you cannot afford to automate, when in reality you cannot afford not to.

This will be the first time that Cleveland Agile Group (CleAg) and FENEO/IEEE have joined forces to bring such talent to Cleveland.


Donations are very welcome and appreciated. A donation box will be available at the sign in table. Please make all checks out to IEEE Cleveland. Also, FENEO is always looking for new sponsors.


Space is limited! To reserve your seat, please RSVP by March 21 at http://www.clevelandieee.org/jgrsvp.

Saturday, March 12, 2011

Are we too reliant on GPS/GNSS? The Royal Academy of Engineering says we are.

I have blogged in the past about our reliance on GPS technology here, Politicians replace Air Traffic Control RADAR with GPS, and here, Scientists, Politicians Take Electromagnetic Pulse (EMP) Threat Seriously. Human Exposure to EM Fields. Now the Royal Academy of Engineering in London has released a new report: Global Navigation Space Systems: reliance and vulnerabilities.

This report details how we have become too reliant on global navigation satellite systems (GNSS); the Global Positioning System (GPS) is currently the most widely used and best-known example of a GNSS.  GPS is used for far more things than just getting us from Point A to Point B with maps of dubious accuracy.  Telecommunication network timing and the international banking system are a couple of examples of 'hidden' uses of GPS. The timing aspect of GPS is used by these infrastructure systems more than the position aspect.  The report covers other infrastructure uses, and how they might be attacked and exploited.

The report also says that there should be an independent backup to GPS.  It is interesting to note that the U.S. recently destroyed the LORAN system, with explosives no less, under the guise of saving money; it cost more to dismantle the system than it would have cost to keep it running.  The paranoid among us might think there is a conspiracy to get everyone relying on a technology and then take it away to advance a yet unknown agenda.

Japan nuclear power plant at Fukushima-Daiichi to become a real China Syndrome?

The headline reads: Huge blast at Japan nuclear power plant at Fukushima-Daiichi. Are we seeing a real-life version of the China Syndrome about to play out? Let us all pray that is not the case.

The China Syndrome refers to a loss of coolant in the reactor vessel, the idea being that the reacting core becomes so hot that it burns down through the Earth until it comes out the other side in China. What really happens is that the hot core burns down until it hits groundwater, which is then turned into superheated steam, blowing the whole mess back up into the environment.

Design information on Fukushima-Daiichi, obviously somewhat outdated as it still says "Operating Fully", can be found at Global Energy Observation. Fukushima-Daiichi is located at 141.0329686159818, 37.425775181836 if you'd like to find it on Google Earth.

The Nuclear Regulatory Commission (NRC) regulates commercial nuclear power plants that generate electricity in the United States. They have a lot of educational material on how nuclear reactors work. There are several types of power reactors; in the United States, only the two most prevalent types, Pressurized Water Reactors (PWRs) and Boiling Water Reactors (BWRs), are in commercial service. I've been told by people more knowledgeable than I that Japan's reactors are the BWR type.




Images courtesy of the NRC.


The teachers' instruction manual, Reactor Concepts Manual, might be of interest for you to read, especially chapter five:


There are various radiation monitoring systems in operation around the world. The one of most interest, in northern Japan, is at present either damaged or disabled, which shows that we must always consider catastrophic events in our Embedded Systems designs.

Other monitoring systems that I know of include the German radioactivity monitoring network ODL-Netz, which opens a map with about 1800 stations; the stations are clickable and show the measurements from the last seven days. There is also the Community Environmental Monitoring Program in Nevada, where gamma radiation is measured and recorded in near real time.

Do you know of any others, particularly ones that are worldwide? [Shades of Harold King's novel Red Alert, where a super-secret worldwide radiation monitoring system is a central part of the plot.]

The U.S. Geological Survey has a map of the Latest Earthquakes in the World - Past 7 days

Some have speculated that recent earthquakes have been the result of increased solar activity; such solar activity is documented by the NOAA / Space Weather Prediction Center.

For those that really want to crawl out on a limb, some say the next massive earthquake will come around March 20th and/or April 18th, probably in New Zealand, based on a recent trend and on the Moon reaching lunar perigee, its closest approach until 2016. That there is a connection between the Moon and earthquakes is not speculation; it is tidal forces at play.

Saturday, March 5, 2011

NASA to outsource Validation and Verification. Do you want the job?

Sometimes you run into things that just make you shake your head and wonder, what is going on here? Case in point: the National Highway Traffic Safety Administration outsourced the study of Toyota's sudden acceleration problem to NASA because they were perceived to be the best around to study the problem. Now this week we find that NASA wants to outsource its Validation and Verification work. Good enough for cars but not for space shots?

Before getting to the NASA solicitation, I want to define what Validation and Verification mean. Just as many people confuse Weather Watches and Warnings, many people are unclear on the difference between Validation and Verification.

  • Weather watch is used when the risk of an event has increased, but its occurrence and timing are still uncertain.
  • Weather warning is used for conditions posing a threat to life or property.
Definition of Terms:

The FDA's Glossary of Computerized System and Software Development Terminology defines many of the terms used in this blog and on our Software Safety site.

Defect:  The difference between the expectation and the actual results.

Validation and Verification:
Validation and Verification are a set of terms you find when working with Software Safety.  Many people do not understand how they differ from each other.

These are my working definitions for Validation and Verification (V&V):
  • Validation: Have we built the correct device?  Do we meet the customer's requirements?
  • Verification: Have we built the device correctly? Did we find and remove all of the 'bugs'?

Requirements and Specifications:

Clarifying the distinction between the terms "requirement" and "specification" is important.

My working definitions for Requirements and Specifications:

Requirements are a statement of what the customer wants and needs.  Requirements are used for validation.  Specifications are the documentation of how the customer requirements are met by the system design.  Specifications are used for verification.

A requirement can be any need or expectation for a system or for its software. Requirements reflect the stated or implied needs of the customer, and may be market-based, contractual, or statutory, as well as an organization's internal requirements. There can be many different kinds of requirements (e.g., design, functional, implementation, interface, performance, or physical requirements). Software requirements are typically derived from the system requirements for those aspects of system functionality that have been allocated to software. Software requirements are typically stated in functional terms and are defined, refined, and updated as a development project progresses. Success in accurately and completely documenting software requirements is a crucial factor in successful validation of the resulting software, and the project as a whole.

A specification  "means any requirement with which a product, process, service, or other activity must conform." (See 21 CFR§820.3(y).) It may refer to or include drawings, patterns, or other relevant documents and usually indicates the means and the criteria whereby conformity with the requirement can be checked. There are many different kinds of written specifications, e.g., system requirements specification, software requirements specification, software design specification, software test specification, software integration specification, etc. All of these documents establish "specified requirements" and are design outputs for which various forms of verification are necessary.

Device failure (21 CFR§821.3(d)). A device failure is the failure of a device to perform or function as intended, including any deviations from the device’s performance specifications or intended use.

Now with that background under our belt, we can get a better grasp of what NASA is seeking in their Proposals For Software Verification And Validation. This contract will provide resources for NASA-directed software verification and validation services; software safety assurance support for agency missions; and potential software development work for other government agencies.

NASA INDEPENDENT VERIFICATION AND VALIDATION SERVICES
Solicitation Number: NNG11310421R
Agency: National Aeronautics and Space Administration
Office: Headquarters
Location: Office of Procurement (HQ)

Synopsis:

Added: Sep 21, 2010 3:02 pm Modified: Mar 02, 2011 5:35 pm

NASA/Goddard Space Flight Center announces the release of the final RFP for NASA Independent Verification and Validation Services. Proposals submitted in response to this RFP shall be submitted by April 5, 2011, at 3:00pm Eastern Standard Time. The due date for questions or comments is March 24, 2011 to ensure our timely response. All questions and comments must be submitted in writing via email to the following email addresses: Laura.E.Freeman[-a-t-despaming]nasa.gov. Telephone questions will not be accepted. Technical documents related to this procurement can be obtained from the Procurement Proposal Library at: http://www.nasa.gov/centers/ivv/recompete/index.html. Documents related to this procurement will be available over the Internet. These documents will reside on a World Wide Web (WWW) server, which may be accessed using a WWW browser application. The Internet site, or URL, for the NASA/HQ Business Opportunities home page is http://prod.nais.nasa.gov/cgi-bin/eps/bizops.cgi?gr=D&pin=04 Offerors are responsible for monitoring this site for the release of the solicitation and any amendments. Potential offerors are responsible for downloading their own copy of the solicitation and amendments (if any).

Here is the link to NASA's V&V Facility, and I'll repeat the link I gave last week to NASA's Software Safety Guidebook. NASA has other standards and programs that are worth studying such as their Standards and Technical Assistance Resource Tool, for example the Software Formal Inspections Standards and Langley's Formal Methods.

So do you think you and I should team up and add ourselves to the Interested Vendors List for this V&V opportunity, or maybe one of the other 23,000+ opportunities?

Mazda6 "Bugs" and real world design considerations

While we may never know for sure if Toyotas have software bugs, we do know for certain that Mazda6 sedans have them. Okay, for the purists, it is not a bug but an arachnid, specifically a Yellow Sac spider. It seems these spiders like building nests in the fuel system. I know from experience that some spiders are attracted to certain smells, like propane, and can be a problem to keep out of backup generator systems.

Designing for rodents and other creepy-crawlies is one of those things I've never seen show up in any project requirements document, yet experience has taught me that these kinds of real-world problems must be considered in any system design.

Ever consider what happens when a red ant walks across the high-impedance A/D sensor traces on the circuit board? It is one of those cases where, once you see it, the problem is obvious, but until then all of the field reports and remote debugging facilities made no sense at all.

Conformal coating is an obvious solution to this 'bug' problem, but coating is not a panacea for all problems. Like all things in hardware design there are always tradeoffs.

A very common misconception is that conformal coating is a hermetic seal. It is used a lot in coal mines, and in the electronics industry in general, to keep caustic dust and other contaminants off circuit boards.

As conformal coating is not a hermetic seal, what really happens is that the impurities in the water are kept away from the circuit, but the water itself still reaches the traces. Since the water is now fairly devoid of contaminants, it acts more like a dielectric insulator. You never notice it in a low-impedance digital circuit, but unless debugging is an obsession, don't let it get near an RF tuning circuit or a high-impedance Wireless Sensor Network circuit.