Sunday, September 27, 2009
E-Prescribing prior art examples and code.
Sunday, May 31, 2009
When was the very first electronic prescription used? I say ~1978.
I asked Fred Trotter, an advocate of Free and Open Source Software in the medical field, a question about FOSS certification of medical software.
I mentioned that I did work in this field back in the '70s; in actuality, it seems we were creating the field in the '70s. Fred asked that I cover this in my blog, so here we are.
"Bob,Your work on e-prescribing is an important source of prior-art!! Please consider detailing exactly what you did, and how you did and even source code as well as the dates that you did this on your blog!!" - Fred Trotter
In the years 1977 to 1982 I was working my way through school by writing medical software for the office of Dr. Armour and Dr. McDowell in Farrell, PA.
Keep in mind the time frame: in 1977 the IBM PC had not yet been invented, and the Internet effectively did not exist outside of academia and the military. The top-end, off-the-shelf computers of the day were the Apple II and the TRS-80 Model I. Very few people knew what a personal computer was; I'm not sure the term "personal computer" had even been coined at that point in time.
I was a classic Nerd; the movie Revenge of the Nerds was a documentary of my life. [We Nerds did win, by the way, or you would not be reading this right now, would you?] My father knew Dr. Armour through their shared interest in Amateur Radio.
Dr. Armour was interested in the new area of computers and how they might help his medical practice become more efficient. Mostly Dr. Armour and Dr. McDowell practiced obstetrics; that is, they delivered babies and provided newborn follow-up care. As giving birth, and what most newborns do, have not changed in many millennia, Dr. A. wanted a standardized menu where you could enter the common things that would happen, so that printed notes could be put in the charts, and the common prescriptions for the new mothers and children could be printed. I'm sure we all know how bad the handwriting of most doctors is. That is because they have to write a lot, and it gets tiring.
Dr. A. set up his personal TRS-80 Model I (the Model III did not yet exist; it would be out soon) in the back office of his practice, gave me a key to the back door, had me sit in on a few exams, with the permission of the mothers-to-be, and on exams of the newborns, and gave me a few notes and long discussions on what he wanted, which I then coded up in BASIC. C compilers were rare and expensive then. After a few back-and-forth sessions a basic system was set up to try out. Today this setup would be termed an Expert System, but I did not know that at the time.
I don't recall for sure if we moved the Model I to a cart, or if we had gotten a second Model I; in any case a cart with the computer and a *noisy* line printer was placed in one of the exam rooms. Eventually all of the exam rooms had TRS-80 Model IIIs in them, each with quieter printers (remember the frequently sleeping babies in the room?). Networking as we know it today did not exist; it was just starting to come out in its earliest forms.
The two things I remember most are spending time in the exam rooms (remember, as a Nerd, that always made me a bit queasy) and the day Dr. Armour came in and said he had just gotten a phone call from the druggist. I think it was the RiteAid at the Shenango Valley Mall, but I don't recall for sure.
The druggist had called to ask if the printed prescription he was holding in his hand was for real. I remember distinctly asking "Is there a problem? What is wrong with it?", to which Dr. A. replied, "No, they loved them."
I know Dr. Armour did discuss doing this setup with other doctors in the area, but I don't recall anything really coming of it. Remember, small computers were still unknown to almost everyone at that point in time.
I know Dr. Armour and I never even considered patenting or copyrighting the system back then; that was the nature of the time. Not sure this would count as Open Source, as the term did not exist then. We would have given the code to anyone that wanted it. Few doctors seemed to 'get it' then. I wonder if they even get it now at times?
I know I don't have any of that source code or notes any longer. This does give me a good reason to go visit Dr. Armour and see how he is doing, and to see if he has anything left. He retired long ago.
One other thing worth mentioning was that Dr. A. and I attended the very first MUMPS conference in DC in 1981. I still have the DEC MUMPS badge around here some place, and the manuals on the language. Dr. A. and I thought it would be a good way to get things networked. The technology required, DEC machines, was just a bit out of reach of what Dr. A. could afford at that point in time, so we never did a lot with it. To this day I still look in on what is happening with MUMPS once in a while. There are many places that still use MUMPS. I still have the books:
- Computers in Ambulatory Medicine; Proceedings of the Joint Conference of the Society of Computer Medicine and the Society for Advanced Medical Systems. October 30-November 1, 1981 Sheraton Washington Hotel, Washington, D.C.
- A Manual of COMPUTERS IN MEDICAL PRACTICE.
- Computer Programming In ANS MUMPS. A self-instruction manual for non-programmers, by Arthur F. Krieg and Lucille K. Shearer.
The bottom line is: can anyone point to an earlier date than the late 1970's for e-prescribing?
Use offsetof() to avoid structure alignment issues in C
Dan Saks recently wrote Padding and rearranging structure members ("Here's what C and C++ compilers must do to keep structure members aligned") at Embedded.com.
No discussion of structure alignment is complete without covering offsetof() from <stddef.h>.
When I mentioned this to Dan he pointed me to his article on Catching errors early with compile-time assertions, where he does mention offsetof().
offsetof() gives you the offset, in bytes, of a particular structure member from the start of the structure in the C language. This makes writing safe, portable code with structures much easier: when offsetof() is used properly there is no longer any worry about how different compilers might align the structure members on machines of different word sizes.
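As a quick illustration (the structure and member names here are made up for the example), offsetof() reports where each member actually lands, padding included, so the code never has to guess at the compiler's alignment rules:

#include <stddef.h>
#include <stdio.h>

struct packet               /* hypothetical example structure */
{
    char  type;             /* the compiler may pad after this member */
    long  length;
    short checksum;
};

int main( void )
{
    /* Print the real byte offsets, whatever padding this compiler chose. */
    printf( "type     at offset %lu\n", (unsigned long) offsetof( struct packet, type ) );
    printf( "length   at offset %lu\n", (unsigned long) offsetof( struct packet, length ) );
    printf( "checksum at offset %lu\n", (unsigned long) offsetof( struct packet, checksum ) );

    return 0;
}

The same offsetof() values can feed the compile-time assertion trick from Dan's other article, so a layout change breaks the build instead of the field units in the field.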
Sunday, May 10, 2009
Should Developers Be Liable For Their Code?
ZDNet has an interesting article about whether developers should be liable for the code they write:
Software companies could be held responsible for the security and efficacy of their products, if a new European Commission consumer protection proposal becomes law.
Commissioners Viviane Reding and Meglena Kuneva have proposed that EU consumer protections for physical products be extended to software. The suggested change in the law is part of an EU action agenda put forward by the commissioners after identifying gaps in EU consumer protection rules.
The Linux Journal and Slashdot have follow-ups. If you are easily offended by vulgar language, best avoid the Slashdot link.
Most causes of system faults are created before the first line of code is written, or the first schematic is drawn. The errors are caused by not understanding the requirements of the system.
What do you think?
Sunday, April 19, 2009
The Power of Ten: 10 Rules for Writing Safety Critical Code
I just came across a site in an ad on Embedded.com that every reader of this blog needs to check out:
The Power of Ten: 10 Rules for Writing Safety Critical Code.
The comments to their rule #10 support our position that you want every useful compiler warning you can get.
Do you just ignore Compiler Warnings?
Something I just saw on the AVR-GCC list:
"I mean the compiler gives some of the most stupid warnings, such as , when a function that is declared but not used..."
or a past favorite of mine: "It is only a warning, just ignore it". Yellow traffic lights are "only warnings" too, ones that most people do seem to ignore, and that governments are gaming to enhance revenue; sorry, wrong blog...
I have always had a zero tolerance for warnings in code. If you have a warning in your code, your code is broken.
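As a concrete (made-up) example of the kind of bug that hides behind an ignored warning, the following compiles silently with no flags, while -Wall (specifically -Wparentheses) flags the assignment that was meant to be a comparison:

#include <stdio.h>

static int check_sensor( int reading )
{
    int fault = 0;

    if( fault = reading )   /* Bug: assignment where '==' was intended.          */
    {                       /* With -Wall, GCC warns "suggest parentheses around */
        return -1;          /* assignment used as truth value"; with no warning  */
    }                       /* flags it compiles in silence.                     */

    return fault;
}

int main( void )
{
    printf( "%d\n", check_sensor( 0 ) );
    return 0;
}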
If you are using GCC, here are the warnings you can enable; these are what I use in my own Makefiles:
Make sure you have -W and -Wall in your CFLAGS.
CFLAGS += -W -Wall -Wstrict-prototypes -Wchar-subscripts
I generally run with every warning/error message turned on, with the exception of -pedantic and -Wunreachable-code. The latter frequently gives bogus results and the former goes off on commonly accepted code.
# -Werror : Make all warnings into errors.
CFLAGS += -Werror

# -pedantic : Issue all the mandatory diagnostics listed in the C
# standard. Some of them are left out by default, since they trigger frequently
# on harmless code.
#
# -pedantic-errors : Issue all the mandatory diagnostics, and make all
# mandatory diagnostics into errors. This includes mandatory diagnostics that
# GCC issues without -pedantic but treats as warnings.
#CFLAGS += -pedantic

#-Wunreachable-code
#Warn if the compiler detects that code will never be executed. [Seems to give bogus results]
#CFLAGS += -Wunreachable-code

#Warn if an undefined identifier is evaluated in an `#if' directive.
CFLAGS += -Wundef

# Dump the address, size, and relative cost of each statement into comments in
# the generated assembler code. Used for debugging avr-gcc.
CFLAGS += -msize

# -Winline : Warn when a function marked inline could not be
# substituted, and will give the reason for the failure.
CFLAGS += -Winline

Most of the following are turned on via -Wall:

# Function prologues/epilogues expanded as call to appropriate
# subroutines. Code size will be smaller. Use subroutines for function
# prologue/epilogue. For complex functions that use many registers (that need
# to be saved/restored on function entry/exit), this saves some space at the
# cost of a slightly increased execution time.
CFLAGS += -mcall-prologues

# Use rjmp/rcall (limited range) on >8K devices. On avr2 and avr4 architectures
# (less than 8 KB of flash memory), this is always the case. On avr3 and avr5
# architectures, calls and jumps to targets outside the current function will
# by default use jmp/call instructions that can cover the entire address range,
# but that require more flash ROM and execution time.
#CFLAGS += -mshort-calls

# Do not generate tablejump instructions. By default, jump tables can be used
# to optimize switch statements. When turned off, sequences of compare
# statements are used instead. Jump tables are usually faster to execute on
# average, but in particular for switch statements where most of the jumps
# would go to the default label, they might waste a bit of flash memory.
# CFLAGS += -mno-tablejump

# Allocate to an enum type only as many bytes as it needs for the declared
# range of possible values. Specifically, the enum type will be equivalent to
# the smallest integer type which has enough room.
# CFLAGS += -fshort-enums

# Dump the address, size, and relative cost of each statement into comments in
# the generated assembler code. Used for debugging avr-gcc.
CFLAGS += -msize

# Dump the internal compilation result called "RTL" into comments in the
# generated assembler code. Used for debugging avr-gcc.
# CFLAGS += -mrtl

# Generate lots of debugging information to stderr.
#CFLAGS += -mdeb

#-Wchar-subscripts
#Warn if an array subscript has type char. This is a common cause of error, as programmers often forget that this type is signed on some machines. This warning is enabled by -Wall.
#
#-Wcomment
#Warn whenever a comment-start sequence `/*' appears in a `/*' comment, or whenever a Backslash-Newline appears in a `//' comment. This warning is enabled by -Wall.
#
#-Wfatal-errors
#This option causes the compiler to abort compilation on the first error occurred rather than trying to keep going and printing further error messages.
#
#-Wformat
#Check calls to printf and scanf, etc., to make sure that the arguments supplied have types appropriate to the format string specified, and that the conversions specified in the format string make sense.
#
#-Winit-self (C, C++, Objective-C and Objective-C++ only)
#Warn about uninitialized variables which are initialized with themselves. Note this option can only be used with the -Wuninitialized option, which in turn only works with -O1 and above.
#
#-Wimplicit-int
#Warn when a declaration does not specify a type. This warning is enabled by -Wall.
#
#-Wimplicit-function-declaration
#-Werror-implicit-function-declaration
#Give a warning (or error) whenever a function is used before being declared. The form -Wno-error-implicit-function-declaration is not supported. This warning is enabled by -Wall (as a warning, not an error).
#
#-Wimplicit
#Same as -Wimplicit-int and -Wimplicit-function-declaration. This warning is enabled by -Wall.
#
#-Wmain
#Warn if the type of `main' is suspicious. `main' should be a function with external linkage, returning int, taking either zero arguments, two, or three arguments of appropriate types. This warning is enabled by -Wall.
#
#-Wmissing-braces
#Warn if an aggregate or union initializer is not fully bracketed. In the following example, the initializer for `a' is not fully bracketed, but that for `b' is fully bracketed.
#
# int a[2][2] = { 0, 1, 2, 3 };
# int b[2][2] = { { 0, 1 }, { 2, 3 } };
#
#This warning is enabled by -Wall.
#
#-Wmissing-include-dirs (C, C++, Objective-C and Objective-C++ only)
#Warn if a user-supplied include directory does not exist.
#
#-Wparentheses
#Warn if parentheses are omitted in certain contexts, such as when there is an assignment in a context where a truth value is expected, or when operators are nested whose precedence people often get confused about.
#
#This warning is enabled by -Wall.
#
#-Wsequence-point
#Warn about code that may have undefined semantics because of violations of sequence point rules in the C standard.
#
#This warning is enabled by -Wall.
#
#-Wreturn-type
#Warn whenever a function is defined with a return-type that defaults to int. Also warn about any return statement with no return-value in a function whose return-type is not void.
#
#This warning is enabled by -Wall.
#
#-Wswitch
#Warn whenever a switch statement has an index of enumerated type and lacks a case for one or more of the named codes of that enumeration. (The presence of a default label prevents this warning.) case labels outside the enumeration range also provoke warnings when this option is used. This warning is enabled by -Wall.
#
#-Wswitch-default
#Warn whenever a switch statement does not have a default case.
#
#-Wswitch-enum
#Warn whenever a switch statement has an index of enumerated type and lacks a case for one or more of the named codes of that enumeration. case labels outside the enumeration range also provoke warnings when this option is used.
#
#-Wtrigraphs
#Warn if any trigraphs are encountered that might change the meaning of the program (trigraphs within comments are not warned about). This warning is enabled by -Wall.
#
#-Wunused-function
#Warn whenever a static function is declared but not defined or a non-inline static function is unused. This warning is enabled by -Wall.
#
#-Wunused-label
#Warn whenever a label is declared but not used. This warning is enabled by -Wall.
#
#-Wunused-parameter
#Warn whenever a function parameter is unused aside from its declaration.
#
#-Wunused-variable
#Warn whenever a local variable or non-constant static variable is unused aside from its declaration. This warning is enabled by -Wall.
#
#-Wunused-value
#Warn whenever a statement computes a result that is explicitly not used. This warning is enabled by -Wall.
#
#To suppress this warning cast the expression to `void'.
#
#-Wunused
#All the above -Wunused options combined.
#
#-Wuninitialized
#Warn if an automatic variable is used without first being initialized or if a variable may be clobbered by a setjmp call.
#
#This warning is enabled by -Wall.
#
#-Wstring-literal-comparison
#Warn about suspicious comparisons to string literal constants. In C, direct comparisons against the memory address of a string literal, such as if (x == "abc"), typically indicate a programmer error, and even when intentional, result in unspecified behavior and are not portable.
#
#-Wall
# All of the above `-W' options combined. This enables all the warnings about
# constructions that some users consider questionable, and that are easy to
# avoid (or modify to prevent the warning), even in conjunction with macros.
# This also enables some language-specific warnings described in C++ Dialect
# Options and Objective-C and Objective-C++ Dialect Options.
#
#-Wextra
#-Wfloat-equal
#Warn if floating point values are used in equality comparisons.
#
#-Wtraditional (C only)
#Warn about certain constructs that behave differently in traditional and ISO C. Also warn about ISO C constructs that have no traditional C equivalent, and/or problematic constructs which should be avoided.
#
#-Wdeclaration-after-statement (C only)
#Warn when a declaration is found after a statement in a block.
#
#-Wshadow
#Warn whenever a local variable shadows another local variable, parameter or global variable or whenever a built-in function is shadowed.
#
#-Wunsafe-loop-optimizations
#Warn if the loop cannot be optimized because the compiler could not assume anything on the bounds of the loop indices.
#
#-Wpointer-arith
#Warn about anything that depends on the 'size of' a function type or of void. GNU C assigns these types a size of 1, for convenience in calculations with void * pointers and pointers to functions.
#
#-Wbad-function-cast (C only)
#Warn whenever a function call is cast to a non-matching type. For example, warn if int malloc() is cast to anything *.
#
#-Wcast-qual
#Warn whenever a pointer is cast so as to remove a type qualifier from the target type. For example, warn if a const char * is cast to an ordinary char *.
#
#-Wcast-align
#Warn whenever a pointer is cast such that the required alignment of the target is increased. For example, warn if a char * is cast to an int * on machines where integers can only be accessed at two- or four-byte boundaries.
#
#-Wwrite-strings
#When compiling C, give string constants the type const char[length] so that copying the address of one into a non-const char * pointer will get a warning; when compiling C++, warn about the deprecated conversion from string constants to char *. These warnings will help you find at compile time code that can try to write into a string constant, but only if you have been very careful about using const in declarations and prototypes. Otherwise, it will just be a nuisance; this is why we did not make -Wall request these warnings.
#
#-Wconversion
#Warn if a prototype causes a type conversion that is different from what would happen to the same argument in the absence of a prototype. This includes conversions of fixed point to floating and vice versa, and conversions changing the width or signedness of a fixed point argument except when the same as the default promotion.
#
#-Wsign-compare
#Warn when a comparison between signed and unsigned values could produce an incorrect result when the signed value is converted to unsigned. This warning is also enabled by -Wextra; to get the other warnings of -Wextra without this warning, use `-Wextra -Wno-sign-compare'.
#
#-Waggregate-return
#Warn if any functions that return structures or unions are defined or called. (In languages where you can return an array, this also elicits a warning.)
#
#-Wstrict-prototypes (C only)
#Warn if a function is declared or defined without specifying the argument types. (An old-style function definition is permitted without a warning if preceded by a declaration which specifies the argument types.)
#
#-Wold-style-definition (C only)
#Warn if an old-style function definition is used. A warning is given even if there is a previous prototype.
#
#-Wmissing-prototypes (C only)
#Warn if a global function is defined without a previous prototype declaration. This warning is issued even if the definition itself provides a prototype. The aim is to detect global functions that fail to be declared in header files.
#
#-Wmissing-declarations (C only)
#Warn if a global function is defined without a previous declaration. Do so even if the definition itself provides a prototype. Use this option to detect global functions that are not declared in header files.
#
#-Wmissing-field-initializers
#Warn if a structure's initializer has some fields missing.
#
#-Wmissing-noreturn
#Warn about functions which might be candidates for attribute noreturn. Note these are only possible candidates, not absolute ones. Care should be taken to manually verify functions actually do not ever return before adding the noreturn attribute, otherwise subtle code generation bugs could be introduced. You will not get a warning for main in hosted C environments.
#
#-Wmissing-format-attribute
#Warn about function pointers which might be candidates for format attributes. Note these are only possible candidates, not absolute ones.
#
#-Wpacked
#Warn if a structure is given the packed attribute, but the packed attribute has no effect on the layout or size of the structure.
#
#-Wpadded
#Warn if padding is included in a structure, either to align an element of the structure or to align the whole structure. Sometimes when this happens it is possible to rearrange the fields of the structure to reduce the padding and so make the structure smaller.
#
#-Wredundant-decls
#Warn if anything is declared more than once in the same scope, even in cases where multiple declaration is valid and changes nothing.
#
#-Wnested-externs (C only)
#Warn if an extern declaration is encountered within a function.
#
#-Wunreachable-code
#Warn if the compiler detects that code will never be executed.
#
#-Winline
#Warn if a function can not be inlined and it was declared as inline. Even with this option, the compiler will not warn about failures to inline functions declared in system headers.
#
#-Winvalid-pch
#Warn if a precompiled header (see Precompiled Headers) is found in the search path but can't be used.
#
#-Wvolatile-register-var
#Warn if a register variable is declared volatile. The volatile modifier does not inhibit all optimizations that may eliminate reads and/or writes to register variables.
#
#-Wdisabled-optimization
#Warn if a requested optimization pass is disabled. This warning does not generally indicate that there is anything wrong with your code; it merely indicates that GCC's optimizers were unable to handle the code effectively. Often, the problem is that your code is too big or too complex; GCC will refuse to optimize programs when the optimization itself is likely to take inordinate amounts of time.
#
#-Wstack-protector
#This option is only active when -fstack-protector is active. It warns
#about functions that will not be protected against stack smashing.
Sunday, March 29, 2009
IEC 60730 Power Up Self-Tests
I was asked this week what I knew about "a self test at power up according standard IEC61508". The first thing I can tell you is that the standard, Functional safety of electrical/electronic/programmable electronic safety-related systems, has a price tag of over $1200! I always find the high prices of these numerous standards extremely frustrating.
In the past I was involved with the creation of the reports Programmable Electronic Mining Systems: Best Practice Recommendations (In Nine Parts) for the Centers for Disease Control (CDC) / National Institute for Occupational Safety and Health (NIOSH) Mining Division. These reports draw heavily from International Electrotechnical Commission (IEC) standard IEC 61508 [IEC 1998a,b,c,d,e,f,g] and other standards. They are in the public domain, and can be found at my hardware site.
The newer standard, IEC 60730, also mandates power-up self-tests. You can preview what you are getting for your big bucks here.
The IEC 60730 safety standard for household appliances is designed for automatic electronic controls, to ensure safe and reliable operation of products. I always find it a bit ironic that things like our refrigerators and dishwashers now have more stringent standards than some of the devices that really can kill us.
IEC 60730 segments automatic control products into three different classifications:
- Class A: Not intended to be relied upon for the safety of the equipment.
- Class B: To prevent unsafe operation of the controlled equipment.
- Class C: To prevent special hazards.
Hardware:
- Independent clocked Watchdog Timer - this provides a safety mechanism to monitor:
- The flow of the software
- Interrupt handling & execution
- CPU clock too fast, too slow and no clock
- CRC Engine when available - this provides a fast mechanism for:
- Testing the Flash memory.
- Checking serial communication protocols such as UART, I2C, and SPI.
Software:
- CPU Register
- Program Counter
- Flash CRC, using software and/or hardware CRC engines
- RAM Tests
- Independent Watchdog Timeout
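To give a feel for how those pieces fit together at reset, here is a minimal sketch of a power-up self-test sequence. Every function in it is a hypothetical placeholder (a certified Class B library, like the vendor libraries listed below, supplies the real tests), and the real thing runs from the start-up code before main():

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical placeholders; stubs keep this sketch self-contained. */
static bool test_cpu_registers( void )   { return true; }   /* walk patterns through the CPU registers  */
static bool test_program_counter( void ) { return true; }   /* prove that calls land where expected     */
static bool test_flash_crc( void )       { return true; }   /* compare the Flash CRC to a stored value  */
static bool test_ram( void )             { return true; }   /* destructive RAM test, run before main()  */
static void kick_watchdog( void )        { }
static void fault_annunciate( uint8_t which ) { (void) which; }  /* blink/beep a pin, the only output left to trust */

void power_up_self_test( void )
{
    /* Order matters: nothing later can be trusted if the CPU itself is broken. */
    if( !test_cpu_registers() )   { fault_annunciate( 1u ); for( ;; ) {} }
    kick_watchdog();

    if( !test_program_counter() ) { fault_annunciate( 2u ); for( ;; ) {} }
    kick_watchdog();

    if( !test_flash_crc() )       { fault_annunciate( 3u ); for( ;; ) {} }
    kick_watchdog();

    if( !test_ram() )             { fault_annunciate( 4u ); for( ;; ) {} }
    kick_watchdog();

    /* Deliberately letting the independent watchdog expire once, and checking
     * the reset cause, is one common way to prove the watchdog itself works. */
}

int main( void )
{
    power_up_self_test();
    return 0;
}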
The article Safety regulations and their impact on MCUs in home appliances has a short introduction to 60730.
Fortunately for us several companies have implemented IEC 60730 compliant libraries. Listed alphabetically:
- Atmel: AVR998: Guide to IEC60730 Class B compliance with AVR microcontrollers.
- Freescale: IEC 60730: Automatic electrical controls for household and similar use. Along with AN3257: Meeting IEC 60730 Class B Compliance with the MC9S08AW60.
- Luminary Micro [now part of TI]: Stellaris IEC 60730 Library.
- Microchip: Class B Safety Software Library for PIC MCUs and dsPIC DSCs
- NEC: Application Note IEC60730 Class B Support for certification; at 88 pages it is the longest of them all.
- ST: claims to have an STM32 60730 MISRA-compliant library; however, I could find no such library in a quick search.
- TI: How IEC 60730 Impacts Appliance Design and MCU Selection.
- Renesas: IEC 60730-1 Standard.
- Zilog: Z8FMC16100 Series of Flash MCUs PB020302-0207 Motor Control Library Class B Compliant.
What all of these tests fail to address in any meaningful way is what happens when a power-up test fails. The best you can hope for is that you have a beeper or LED hooked up directly to a micro pin that you can blink or beep. For example, if you find that your accumulator has a stuck bit, you are hosed at that point; you cannot guarantee that anything you do is going to be correct.
There is also the problem of trading off being thorough, with exhaustive tests, against being fast. Some standards, such as NFPA, mandate that the system must be operational in under one second, which complicates matters even further. I did have a micro one time that had a hardware failure: the XOR instruction was broken, but only on certain bit combinations. Every other aspect of the part worked just fine. It took days to debug that problem. As the micro in question was hard to get and expensive at the time, swapping it first was not an option.
One closing thought is that you need to be very wary of simple RAM tests. Writing 0xAA/0x55 to every location tells you almost nothing about open address lines and similar faults.
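As a sketch of what a more meaningful test looks like, here is the classic walking-address-line check, in the spirit of Michael Barr's memory-test articles. The ram_under_test buffer is a stand-in so the sketch runs anywhere; on a real part you would point the test at the actual RAM region, and run it from start-up code since it is destructive:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static volatile uint8_t ram_under_test[ 1024 ];   /* stand-in for the real RAM region */
#define RAM_TEST_BASE ( ram_under_test )
#define RAM_TEST_SIZE ( sizeof ram_under_test )

/* Returns 0 if no address-line fault was found, -1 otherwise.  Unlike a plain
 * 0xAA/0x55 data test, this can see two addresses aliasing onto the same
 * physical cell, which is what an open or shorted address line produces. */
static int ram_address_line_test( void )
{
    volatile uint8_t *base = RAM_TEST_BASE;
    size_t pattern_off;
    size_t check_off;

    /* Seed offset 0 and every power-of-two offset with 0xAA. */
    base[ 0 ] = 0xAAu;
    for( pattern_off = 1u; pattern_off < RAM_TEST_SIZE; pattern_off <<= 1 )
    {
        base[ pattern_off ] = 0xAAu;
    }

    /* Disturb offset 0; if any power-of-two cell changed, addresses alias. */
    base[ 0 ] = 0x55u;
    for( check_off = 1u; check_off < RAM_TEST_SIZE; check_off <<= 1 )
    {
        if( base[ check_off ] != 0xAAu )
        {
            return -1;
        }
    }
    base[ 0 ] = 0xAAu;

    /* Walk the 0x55 disturbance through each power-of-two offset in turn. */
    for( pattern_off = 1u; pattern_off < RAM_TEST_SIZE; pattern_off <<= 1 )
    {
        base[ pattern_off ] = 0x55u;

        if( base[ 0 ] != 0xAAu )
        {
            return -1;
        }
        for( check_off = 1u; check_off < RAM_TEST_SIZE; check_off <<= 1 )
        {
            if( ( check_off != pattern_off ) && ( base[ check_off ] != 0xAAu ) )
            {
                return -1;
            }
        }
        base[ pattern_off ] = 0xAAu;
    }

    return 0;
}

int main( void )
{
    printf( "address line test: %s\n", ( ram_address_line_test() == 0 ) ? "pass" : "FAIL" );
    return 0;
}

This only covers the address lines; a fuller Class B RAM test adds March-style passes for stuck and coupled data bits. The point stands either way: a fixed data pattern written everywhere cannot see these faults.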
Saturday, March 28, 2009
Anyone want to do a term paper on CRCs?
Do you know any Math Majors that need a subject for a term paper?
The embedded systems community needs one written on CRCs that is practical rather than pedagogical, like the textbooks that address the subject. Here is a sample paper.
The resulting paper should have practical answers that people in embedded-systems land, like myself, can understand and use. Reading about polynomials over Galois Fields tends to make my eyes glaze over. My speed of mathematics is more that of the Trachtenberg Speed System of Basic Mathematics.
What brought this on today is the new Atmel XMega processor that I'm designing with, which uses the CRC polynomial x^24 + 4x^3 + 3x + 1. That polynomial does not seem to be any of the standard ones, so what are its error detection properties?
Polynomials have to have certain properties; while they may all be primes, not all primes make good CRCs. For example, the properties that make good CRC polynomials will make a very bad random number generator, and vice-versa. Both are done with multi-tap shift registers. "CRC generators do NOT generate maximal-length sequences. In fact, the polynomials are deliberately chosen to be reducible by the factor X + 1, because that happens to eliminate all odd-bit errors." -- Embedded Systems Programming, Jan/1992, Jack Crenshaw. I admittedly have never understood why the "good ones" are the good ones. More of the math vs. get-the-work-done divide.
For some background take a look at these papers:
- The Great CRC Mystery by Terry Ritter.
- A Painless Guide to CRC Error Detection Algorithms.
- Efficient CRC calculation with minimal memory footprint By Yaniv Sapir and Yosef Stein.
- Slow and Steady Never Lost the Race by Michael Barr.
- Accelerating algorithms in hardware By Lara Simsic.
- Boost CRC Library.
- Several others.
I know that a CRC is good only over a certain block length, but what is that block length? The syndrome length? The syndrome length minus one?
One article stated "a 16 bit CRC is good for 4K bits minus one"; I have not figured out how that works out, so I question its accuracy.
I want to CRC my code in Flash; however, I don't want to use a 16-bit CRC if I really should be using a 32-bit CRC. I know the odds of this making any real difference are minuscule, but I never want to give those lawyers an opening.
Andrew Tanenbaum, in Computer Networks, is often quoted on a 16-bit CRC being "99.9998%" good at detecting errors, but how do you calculate these percentages for CRCs of various lengths and, more importantly, for the polynomial in use?
Since we are doing polynomial division and the CRC is the residue of that division, there will be many messages that produce the same CRC, which is not what you want. This is why longer CRCs are better over longer bit runs.
From Tanenbaum, in Computer Networks:
- Detect all single-bit errors.
- Detect all occurrences of two single-bit errors for frames less than 2^(n-1) bits in length.
- Detect all errors involving an odd number of bits.
- Detect all burst errors of length n or less.
- Detect all but 1 in 2^(n-1) burst errors of length n + 1.
- Detect all but 1 in 2^n of longer burst errors.
Where n = number of bits in CRC.
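As a worked example with n = 16: double-bit errors are covered for frames shorter than 2^15 = 32,768 bits (4 K bytes); a 17-bit burst slips through 1 time in 2^15, i.e. 99.9969% are detected; and longer bursts slip through 1 time in 2^16 = 65,536, i.e. 99.9985% are detected. Those are exactly the figures in the crib notes further down.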
See also Algebraic Codes for Data Transmission, Cambridge University Press, 2002.
I've actually spent several years looking for some of these CRC answers, even in real books such as Algebraic Codes for Data Transmission, Cambridge University Press, 2002. The books that I have found are written for people that already understand the math, rather than people like me that just want to get the job done and want to cite a reference in the source code.
My random CRC crib notes collected over many years:
- "Cyclic code for error detection" by W. Peterson and D. Brown, Proc. IRE, Vol 49, P 228, Jan 1961. This is the oldest reference to CRC I could find, and the most obtuse as far as 'getting the work done vs math'.
- "Error Correcting Codes" W. Peterson, Cambridge, MA MIT PRess 1961.
- Tannenbaum, Andrew. Computer Networks, 128-32. Englewood Cliffs, NJ Prentice-Hall 1981.
- "Technical Aspects of Data Communications", by McNamara, John E. Digital Press. Bedford, Mass. 1982
- Ramabadran T.V., Gaitonde S.S., A tutorial on CRC computations, IEEE Micro, Aug 1988.
- Advanced Data Communication Control Procedure (ADCCP). Federal Register / Vol. 47, No. 105 / Tuesday, June 1, 1982 / Notices
- CRC-32 (USA) IEEE-802: Polynomial $04C11DB7: X^32 + X^26 + X^23 + X^22 + X^16 + X^12 + X^11 + X^10 + X^8 + X^7 + X^5 + X^4 + X^2 + X + 1
- $DEBB20E3: PKZIP
- CRC-CCITT V.41 Polynomial $1021: X^16 + X^12 + X^5 + 1
- "CRC generators do NOT generate maximal-length sequences. In fact, the polynomials are deliberately chosen to be reducible by the factor X + 1, because that happens to eliminate all odd-bit errors." -- Embedded Systems Programming Jan/1992 Jack Crenshaw
- 16-Bit CRC can detect:
- 100% of all single-bit errors
- 100% of all two-bit errors
- 100% of all odd numbers of errors
- 100% of all burst errors less than 17 bits wide
- 99.9969% of all bursts 17 bits wide
- 99.9985% of all bursts wider than 17 bits (the same as the checksum)
All burst errors of 16 or fewer bits in length, and all double-bit errors separated by fewer than 65,536 bits (or 8192 bytes), are detected. From "Fletcher's Checksum" by John Kodis, Dr. Dobb's Journal, May 1992.
Test vectors for $1021: "T" gives $1B26; "THE" gives $7D8D; "THE,QUICK,BROWN,FOX,01234579" gives $7DC5.
Byte-wise CRC without a table, Crenshaw 1992. [This is the one I use the most, because it fits in 2K parts; I rewrote it in C and AVR ASM.]
B: Byte    CRC: 16-bit unsigned

Update CRC, no table:
  B   := B XOR LO(CRC);
  B   := B XOR (B SHL 4);
  CRC := (CRC SHR 8) XOR (B SHL 8) XOR (B SHL 3) XOR (B SHR 4);

Build Table:  I: Index 0->255    Z: Byte
  Z        := I XOR (I SHL 4);
  Table[I] := (Z SHL 8) XOR (Z SHL 3) XOR (Z SHR 4);

Update CRC using Table:
  CRC := (CRC SHR 8) XOR Table[ Data XOR LO(CRC) ];
"Calculating CRCs by Bits and Bytes" by Greg Morse; Byte Magazine September 1986.
The CRC is one's complemented, then transmitted least significant byte first. The resulting magic number, via a quirk of polynomial syndromes, will always be $F0B8 if there were no errors. [No math book I've read has even mentioned it, let alone explained it, but it is what I look for in all of my CRC code to tell "good" blocks from "bad" blocks.]
T = Dx XOR Rx
U = (T7 T6 T5 T4) XOR (T3 T2 T1 T0)

CRChi = R15 R14 R13 R12 R11 R10 R9  R8
CRClo = R7  R6  R5  R4  R3  R2  R1  R0
Data  = D7  D6  D5  D4  D3  D2  D1  D0
T     = T7  T6  T5  T4  T3  T2  T1  T0
U     = U7  U6  U5  U4  0   0   0   0

Bit   *15  14  13  12  11 *10   9   8   7   6   5   4  *3   2   1   0
#1      .   .   .   .   .   .   .   . R15 R14 R13 R12 R11 R10  R9  R8
#2     U7  U6  U5  U4  T3  T2  T1  T0   .   .   .   .   .   .   .   .
#3      .   .   .   .   .  U7  U6  U5  U4  T3  T2  T1  T0   .   .   .
#4      .   .   .   .   .   .   .   .   .   .   .   .  U7  U6  U5  U4

Line #1 is CRChi moved into CRClo; line #2 is the high nibble of U and the low nibble of T; line #3 is the line #2 byte shifted left by 3 bits; and line #4 is U shifted right by 4 bits.
If byte is "T" ($54), CRC = $FFFF, then answer should be $1B26.
Cyclic Redundancy Checks:
With a properly constructed 16-bit CRC, an average of one error pattern will not be detected for every 65,535 that would be detected. That is, with CRC-CCITT, we can detect 99.998 percent of all possible errors.
It is precisely this paragraph that led me to ask the original questions:
"It should be noted that CRC polynomials are designed and constructed for use over data blocks of limited size; larger amounts of data will invalidate some of the expected properties (such as the guarantee of detecting any 2-bit errors). For 16-bit polynomials, the maximum designed data length is generally 2^15 - 1 bits, which is just one bit less than 4K bytes. Consequently, a 16-bit polynomial is probably not the best choice to produce a single result representing an entire file, or even to verify a single EPROM device (which are now commonly 8K or more). For this reason, the OS9 polynomial is 24 bits long."
"By some quirk of the algebra, it turns out that if we transmit the complement of the CRC result and then CRC-process that as data upon reception, the CRC register will contain a unique nonzero value depending only upon the CRC polynomial (and the occurrence of no errors). This is the scheme now used by most CRC protocols, and the magic remainder for CRC-CCITT is $1D0F (hex)."
No reference has ever explained this "quirk". $1D0F is more commonly expressed as $F0B8, in reverse bit order.