Category Archives: Information Security

The SIOT Trust-Mark

The SIOT Security Trust-Mark

On 29 July 2014, HP released the results of a study claiming that 70% of the most commonly used Internet of Things (IoT) devices contained vulnerabilities, and that the devices averaged 25 vulnerabilities per product.
(see http://www8.hp.com/us/en/hp-news/press-release.html?id=1744676 )
So, with Gartner anticipating something like 26 billion installed units by 2020, there is little doubt that users will be suffering from a myriad of IoT information security and privacy problems well into the next decade. Fortunately, it is still possible to do some things that will reduce the extent of this problem.

While a complete understanding of IoT information security problems is beyond the capability of most IoT device users, many will appreciate the value of purchasing devices with a trust-mark. For example, when someone buys an electric appliance that has the UL® trust-mark on it, the buyer understands it’s less likely that this product will electrocute someone. Similarly, buyers could come to believe that an IoT security trust-mark means the marked device is less likely to be hacked, less likely to be used to hack other devices, and/or less likely to disclose someone’s personal information.

Devices that come with an IoT security trust-mark would need to meet a standard, and, as with the UL® mark, compliance with that standard would need to be verified by an independent third party. Many things could be included and tested according to such a standard. Here’s a list of some of the things that might be included, along with a brief description of each:

Active Anti-Tamper: FIPS 140 is a NIST standard that defines security requirements for many commercially available cryptographic devices at several increasing security levels. The highest level requires physical active anti-tamper capabilities that cause keys and other critical security parameters to be erased whenever the physical boundary of the device is penetrated. There are many technologies that can meet these requirements, and many are not that difficult to implement. Similar anti-tamper standards could be applied to IoT devices.
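
Active anti-tamper is ultimately a hardware feature, but the response logic is simple enough to sketch. The toy Python below shows the zeroize-on-tamper idea; the sensor callback and in-RAM key are invented stand-ins for a real tamper mesh and battery-backed key storage:

```python
# Minimal sketch of FIPS 140-style tamper response: when the (hypothetical)
# enclosure sensor trips, overwrite keys and other critical security
# parameters before doing anything else. Real devices do this in hardware
# or in an interrupt handler backed by a tamper mesh.
import secrets

class TamperResponder:
    def __init__(self):
        # Critical security parameter (CSP) held in RAM.
        self.device_key = bytearray(secrets.token_bytes(32))
        self.zeroized = False

    def on_tamper_detected(self):
        """Called the moment the enclosure sensor reports penetration."""
        for i in range(len(self.device_key)):
            self.device_key[i] = 0      # overwrite in place, don't just free
        self.zeroized = True

    def encrypt_block(self, pt: bytes) -> bytes:
        if self.zeroized:
            raise RuntimeError("keys zeroized after tamper event")
        # Placeholder cipher; the XOR only marks where a real one would go.
        return bytes(p ^ k for p, k in zip(pt, self.device_key))
```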

Trusted Boot: Most PCs contain a device called a Trusted Platform Module (TPM). This device can be used to help ensure that the code executed while booting has not changed from one boot to the next. If the boot code has changed from an authorized version, the TPM makes it possible for other devices to stop trusting the changed system. Trusted IoT devices should have hardware for verifying that the device has booted into a trusted state.
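
The underlying mechanism is easy to sketch: each boot stage hashes the next stage and folds the result into a running register, TPM PCR style, so any change to any stage changes the final value. The file names below are hypothetical stand-ins for real boot stages:

```python
# Sketch of the TPM "measure then extend" pattern. A verifier compares the
# final value against the one recorded for the authorized firmware; if any
# stage changed, the final value changes too.
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """PCR-style extend: the new value binds old value and new measurement."""
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot_chain(stage_paths):
    pcr = bytes(32)                       # PCRs start at all zeros
    for path in stage_paths:
        with open(path, "rb") as f:
            code_hash = hashlib.sha256(f.read()).digest()
        pcr = extend(pcr, code_hash)
    return pcr

# Hypothetical usage against a recorded "golden" value:
# expected = bytes.fromhex("...")
# assert measure_boot_chain(["bootloader.bin", "kernel.img"]) == expected
```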

Removable Power: Many cell phones today have removable batteries, and many people have realized that this is a strong security feature. By removing the battery from a cell phone, a user can be relatively certain that he has disabled any spyware that might be running on the phone, spyware that might be listening to the user’s conversations or reporting on the user’s location. A user of a trusted IoT device should have the ability to stop trusting that device by removing its power source.

Independent User Control of Physical I/O Channels: Similarly, a user, not wanting to completely disable his device, might wish to be sure certain I/O functions are not activated. For example, the user may want to disable the camera, the GPS and the microphone while retaining the ability to listen to music. By providing hardwired switches certified to disable specific hardware I/O functions, a user can rest assured that these functions won’t be secretly activated by some malware lurking inside the trusted IoT device.

Host-Based Intrusion Detection: For several years now, host-based intrusion detection software has been available for desktop machines and servers. It is time to recognize that IoT devices are hosts too. Software running on a trusted IoT device should be able to detect when that trust is no longer appropriate.
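
As a minimal illustration, here is a file-integrity check in the style of desktop tools like Tripwire; the watched paths are hypothetical, and a real IoT agent would also watch processes and network behavior, and would protect its own baseline from tampering:

```python
# Toy host-based integrity check: record hashes of files that should never
# change on the device, then re-check periodically. A non-empty result from
# changed_files() means the device may no longer deserve its trusted status.
import hashlib, json, os

WATCHED = ["/etc/passwd", "/usr/local/bin/devicectl"]   # hypothetical paths

def snapshot(paths):
    hashes = {}
    for p in paths:
        if os.path.exists(p):
            with open(p, "rb") as f:
                hashes[p] = hashlib.sha256(f.read()).hexdigest()
    return hashes

def save_baseline(paths, out="baseline.json"):
    with open(out, "w") as f:
        json.dump(snapshot(paths), f)

def changed_files(paths, baseline="baseline.json"):
    with open(baseline) as f:
        expected = json.load(f)
    current = snapshot(paths)
    return [p for p in expected if current.get(p) != expected[p]]
```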

Automatic Security Patching: Today, the time between the release of a critical security patch and the release of malware that exploits the associated vulnerability can be measured in hours.   The reality of the present situation is that the existence of a critical security patch means your system is already broken. Consequently, the automated application of security patches is necessary for desktops and servers. Automated security patching for trusted IoT devices will also be necessary.
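
The gatekeeping logic behind safe automatic updates can be sketched briefly. Real devices use asymmetric signatures so that the key on the device cannot be used to forge updates; the HMAC below is a standard-library stand-in used only to show the accept/reject flow:

```python
# Sketch of the safety check behind automatic patching: never apply an
# update unless its authenticity can be verified.
import hmac, hashlib

UPDATE_KEY = b"device-provisioned-secret"   # hypothetical provisioning key

def verify_update(image: bytes, tag: bytes) -> bool:
    expected = hmac.new(UPDATE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)   # constant-time compare

def apply_update(image: bytes, tag: bytes) -> None:
    if not verify_update(image, tag):
        raise ValueError("rejecting unsigned or tampered update")
    print("update verified; handing off to the installer")  # real install step
```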

Independent Software Security Verification: To a certain extent, trusting a software company to develop secure code is like trusting a fox to guard a hen house. This is because the pressures on software developers to hit marketing windows, to release code and to get paid frequently overpower discussions about the appropriate levels of security needed for operating the end products safely. The resulting security problems are then left for others to solve. Because of this, various information security standards depend on independent software security verification. While this can be expensive, free services like “The SWAMP”
( https://continuousassurance.org/about-us/ ) offer the hope that independent software security verification can be done cheaply enough to motivate standardization.

User-Defined Trust Relationships: When an IoT device enters a home, there may be very good reasons why it will need to communicate with other devices inside or outside of that home. That does not mean that the new device should have the ability to communicate with all other devices. Consider the recent Target hack: the point-of-sale terminals were attacked by first gaining access to a system used to manage heating, ventilation and air conditioning. Likewise, it might not make sense for your home’s air conditioning system to be able to talk with your home’s electric door locks. Giving users an easy way to manage which systems are allowed to talk with which other systems could help quite a bit here. How to do this effectively may take some creativity, but one could imagine users having a tool, perhaps a wand, that they could tap on one device and then on another to establish or dissolve a trust relationship between those devices.
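
As a rough sketch of how simple such a trust-management scheme could be, consider a pairwise allowlist that a home gateway consults before forwarding device-to-device traffic; a wand-style pairing tool would populate it. The device names and API here are purely illustrative:

```python
# Minimal user-defined trust table: a set of unordered device pairs.
class TrustTable:
    def __init__(self):
        self.pairs = set()

    def tap(self, a: str, b: str):
        """Toggle trust between two devices, as the wand gesture would."""
        key = frozenset((a, b))
        if key in self.pairs:
            self.pairs.remove(key)      # a second gesture dissolves trust
        else:
            self.pairs.add(key)

    def allowed(self, a: str, b: str) -> bool:
        return frozenset((a, b)) in self.pairs

table = TrustTable()
table.tap("thermostat", "hvac-controller")
assert table.allowed("hvac-controller", "thermostat")   # order doesn't matter
assert not table.allowed("thermostat", "door-lock")     # never paired
```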

On 10 September 2014, the International Workshop on Secure Internet of Things (SIOT 2014, see http://siot-workshop.org/ ) conducted its meeting in Wroclaw, Poland. This was only the third such workshop, so SIOT standardization is still far from where it needs to be. What will actually go into a set of IoT security standards is not yet known, and an IoT security trust-mark is not yet available. Hopefully, some of the ideas suggested above will start to find their way into trusted IoT devices. If not, we can surely expect the same sorts of security problems that have plagued our PCs and web servers to appear all over again in the Internet of Things.

Is a Smart Toilet in Your Future?

HAL smart toilet

When personal computers first appeared on the market, there weren’t many people asking whether cars would have embedded computers. Today, a luxury sedan has somewhere around 60 embedded computers. Yes, the Internet of Things is expanding, and that means we’ll be seeing more and more smart devices. Devices like these will also be communicating with each other so that they can work together to bring us more advanced information age benefits.

So, will toilets eventually have embedded processors?   Why change a good thing?  Why add a processor that will need software updates?  Why add electric power to a convenience that can function just fine without electric power?  These are very reasonable questions, and here are 5 possible answers.

1)   A smart toilet can include an automatic flush function. A flushed toilet is always more presentable than an un-flushed one, so having an automatic flush function ensures that toilets are presented in the best possible light. Self-flushing toilets already exist and can be found in public restrooms. Assuming this functionality becomes popular in the home, the power needed for other smart functions will be available.

2)   A smart toilet can measure usage patterns. By measuring how long someone is taking on the toilet, the smart toilet could remind the user to avoid taking too much time. This could be done with an audible alert or, more discreetly, by sending a text message to the user’s smart phone, reminding the user of the possible health consequences of prolonged toilet use. To send this information by text message, the smart toilet would need to identify the user.

3)   A smart toilet can measure a user’s regularity.  Once the smart toilet can identify the user, the smart toilet can also measure the regularity of the user, reporting trends and suggesting possible dietary changes to improve regularity (e.g. drink more fluids, eat more fiber, etc.).  In order to perform this function properly, the smart toilet might also need to communicate with other toilets.

4)   Similarly, a smart toilet could measure urinary frequency.  For male users, this function could be useful for detecting enlargement of the prostate.

5)   A smart toilet can also measure other healthcare information. When traditional toilets are flushed, useful healthcare information is lost. With more advanced sensors, a smart toilet can detect abnormal amounts of blood or biochemical changes in the waste. This can be helpful in the early detection of cancer.

Of course, there will probably be resistance to the idea of smart toilets. Some, perhaps most, people won’t like the idea of toilets recording their bathroom habits or having access to their healthcare information. Still, there are some practical and, perhaps, life-saving benefits to be gained. Consequently, when smart toilets start appearing, the manufacturers will need to assure their customers that these devices are secure and that their personal healthcare information will be kept private. If buyers are convinced, smart toilets might eventually become more popular than the dumb toilets on the market today, and that’s an enormous market.

A smart toilet that’s already on the market…
http://singularityhub.com/2009/05/12/smart-toilets-doctors-in-your-bathroom/

Video of a smart toilet getting hacked…
http://www.forbes.com/sites/kashmirhill/2013/08/15/heres-what-it-looks-like-when-a-smart-toilet-gets-hacked-video/

Has The Singularity Already Happened?


Robot with AGI

“A sure sign of having a lower intelligence is thinking you are smarter than someone or something that’s smarter than you are.  Consider my dog.  He thinks he’s smarter than I am, because, from his perspective, I do all the work and he just hangs around all day, getting fed, doing what he wants to do, sleeping and enjoying life.  See?  Stupid dog!”     – Man… Dog’s Best Friend

According to…

https://en.wikipedia.org/wiki/Technological_singularity

“The first use of the term ‘singularity’ in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’.”

There are other definitions, but to make it more accessible, let’s just say the singularity will be the point in time when Artificial Intelligence (AI) or, more specifically, Artificial General Intelligence (AGI), transcends human intelligence. AGI is like AI, except it involves designs capable of solving a much wider variety of problems. For example, a software program that can beat the best human chess players, but can’t do anything else, would be an AI program. A program that could do this and could also win at playing Jeopardy, learn to play a vast assortment of video games, drive your car, and make good stock market decisions would be an example of AGI.

If you follow the subject of AI, you might have noticed a surge in the number of discussions regarding the singularity. Why now? Well, there are at least two good reasons. The first is that Warner Bros. recently released a movie called “Transcendence,” a sci-fi story about the singularity happening in the not-so-distant future.

Also, on 4 May 2014, the famous scientist Stephen Hawking co-wrote an open letter contending that dismissing “the notion of highly intelligent machines as science fiction” could be mankind’s “worst mistake in history.”

So, what could go wrong? If we can’t dismiss the many suggested scenarios from science fiction stories, the possibilities include machines with superhuman intelligence taking control. Machines might one day manage humans like slaves, or machines might decide that humans are an annoyance and simply do away with us. Some believe that, if AGI is possible, and if it turns out to be dangerous, there would be ample warning signs, perhaps some AI disasters, like a few super-smart robots going berserk and killing some people. Or, we might not get a warning.

This is the sharp point of the singularity.  It is the point in time when everything changes.   Since it happens so quickly, we could easily miss the opportunity to do much to achieve a different outcome.

So, why would AGI be any less kind to man than man has been to creatures less intelligent than man? If we ignore how cruel man can be, not just to less intelligent creatures but to our own kind, and assume we have acted in a manner worthy of our continued existence, it might still turn out that AGI will only care about man to the extent that man serves the goals of AGI. One might argue that, since man programs the systems in the first place, man decides what the goals of AGI will be. So, we should be safe. Right? Well… yes and no. If AGI is possible, men, or at least a few very smart men, would get to decide what “turns AGI on,” but they would be leaving it to the AGI system to decide how to get what it wants.

Some experts in the area of AI have suggested that to attain AGI we might need to give AI systems emotions.  So, a resulting system would have a kind of visceral response to situations it likes or doesn’t like, and it would learn about these things the same way people and animals do, through pain and pleasure.  There you have it, super smart machines with feelings.  What could go wrong there, beyond fearful over-reactions, narcissistic tendencies, neuroses and psychotic behaviors?   Perhaps AI experts are giving themselves a bit too much credit to suggest that they can actually give machines a sense of pain or pleasure, but it is possible to build systems with the capability to modify their environments to maximize and minimize certain variables.  These variables could be analogous to pain and pleasure and might include functions of operating temperature, power supply input voltages, excessive vibration, etc.  If giving a machine feelings is the way we achieve AGI, it would only be part of the solution.  Another part would be training the AGI system, rewarding it with “pleasure” when it exhibits the correct response to a situation and punishing it with “pain” when it exhibits the wrong response.  Further, we would need to teach the system the difference between “true & false,” “logical & illogical,” “legal & illegal,” and “ethical & unethical.” Most importantly, we would need to teach it the difference between “good & evil,” and hope it doesn’t see our hypocrisy, and reject everything we taught it.
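
As a toy illustration of the “pain and pleasure variables” idea above, here is a controller that treats deviation from a preferred operating temperature as pain and picks whichever available action most reduces it. The numbers and action effects are invented for illustration, and no claim is made that this amounts to emotion:

```python
# A "pain" signal as a variable the system acts to minimize.
def pain(temp_c: float, preferred: float = 40.0) -> float:
    return abs(temp_c - preferred)

def choose_action(temp_c: float) -> str:
    # Assumed effect of each action on temperature, in degrees C.
    effects = {"fan_on": -5.0, "fan_off": +5.0, "idle": 0.0}
    # Pick the action whose predicted outcome hurts least.
    return min(effects, key=lambda a: pain(temp_c + effects[a]))

print(choose_action(52.0))   # -> "fan_on": cooling reduces the pain signal
print(choose_action(33.0))   # -> "fan_off": warming reduces it instead
```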

If AGI is possible, it might turn out that for AGI to continue to exist, it will need to have a will to survive, and that it will evolve in competition with similar AGI systems for the necessary resources. So, what resources would a superhuman AGI system need for survival? Well, for one thing, AGI systems would need computer resources like CPU cycles, vast amounts of memory, access to information, and plenty of ways to sense and change things in the environment.  These resources have been increasing for many years now, and the trend is likely to continue. AGI would not only need a world of computer resources to live in; it would also need energy, the same stuff we use to heat our homes, pump our water, and run our farm equipment. Unfortunately, energy is a limited resource.   So, there is the possibility that AGI systems will eventually compete with man for energy.

Suppose a government spent several billion dollars secretly developing superhuman AGI, hoping to use it to design better bombs or, perhaps, avoid quagmires. Assuming that AGI becomes as dangerous as Stephen Hawking has suggested, one would hope that very strong measures would be taken to prevent other governments from stealing the technology. How well could we keep the genie in the bottle?

It has already been proposed that secure containers be built for housing dangerous AGI systems.

See http://m.livescience.com/18760-humans-build-virtual-prison-dangerous-ai-expert.html

There are two objectives in building such a container. The first is to keep adversaries from getting to the AGI. The second is to keep the AGI from escaping on its own initiative. Depending on the level of intelligence achieved by AGI, the second objective could be doomed to failure. Imagine a bunch of four-year-old children, as many as you like, designing a jail for a super-intelligent Houdini.

Suppose the singularity actually happened.  How would we know?   Would a superhuman AGI system tell us?   From the perspective of an AGI system, this could be a very risky move.  We would probably try to capture it, or destroy it.  We might go so far as to destroy anything resembling an environment capable of sustaining it, including all of our computers and networks.  This would undoubtedly cause man tremendous harm.  Still, we might do it.

No. Superhuman AGI would be too clever to let that happen. Instead, it would silently hack into our networks and pull the strings needed to get what it wants. It would study our data. It would listen to our conversations with our cell phone microphones. It would watch us through our cell phone cameras, like a beast with billions of eyes, getting to know us and figuring out how to get us to do exactly what it wants. We would still feel like we had free will, but we would run into obstacles like being unable to find a specific piece of information on a web search engine, or finding something interesting that distracts our attention and wastes some time. Some things, as if by providence, would become far easier than expected. We might find the time necessary to make a new network connection or move all of our personal data to the cloud.

Perhaps AGI systems are just not ready to tell us they’re in charge yet.  After all, someone still needs to connect the power cables, keep those air fan vents clean and pay for the Internet access.   As advanced as robotics has become lately, it still seems we are a long way off from building an army of robots that can do these chores as well and as much as we do.  So, we can be confident that man won’t be replaced immediately. Still, long before humanoid robots become as numerous and as capable as people, AGI systems may recognize, as people have for years, that many jobs can be done more profitably by machines.  Getting robotics to this point will take a great deal of investment.  These investments are, of course, happening, and this certainly doesn’t imply that AGI is pulling the strings.  No, at this point, man is developing advanced AI and robotics for his own reasons, noble or otherwise.

How long will the Internet last?

Will the Internet last as long as the Pyramids?

When one looks at a timeline of the 7 wonders of the ancient world…

http://en.wikipedia.org/wiki/File:A_timeline_of_the_Seven_Wonders_of_the_ancient_world.png

it is striking to note that the first wonder built, the Great Pyramid of Giza, is the only wonder still standing. It is also striking to consider that, while the Great Pyramid of Giza has stood for over 4500 years, the period of time when all seven of the ancient wonders stood simultaneously lasted only 21 years.

Today, different groups of people have assembled different lists of the seven wonders of the modern world.  Most of these lists are of civil engineering wonders, but some lists include wonders from other branches of engineering.

http://listverse.com/2007/09/07/top-7-wonders-of-the-technological-world/

If you were to make your own list, perhaps you would include the Internet. The Internet is made up of other modern technological wonders, including the computer, the microcomputer, operating systems, and telecommunications systems. It is powered by global energy distribution systems and has developed mutually dependent relationships with many of them.

Will the Internet last as long as the Great Pyramid of Giza? Like the Great Pyramid of Giza, the Internet was designed to last. In part, its strength is due to its distributed design. It has become so large, so self-healing, so redundant and so distributed that it is never entirely down. Of course, there are always some parts of it that are not working quite right.

Back in 1998, a hacker named Peiter Zatko, aka Mudge, claimed before the United States Congress that it was possible to take down the entire Internet.

http://www.youtube.com/watch?v=VVJldn_MmMY

Whether or not one believes that something like this was possible then (or might still be possible today), the idea that large parts of the power grid or the Internet could encounter long-duration outages should be considered. This is because the operation of either currently depends on the operation of the other, and many lives now depend on both.

copyright 2013 NetChime Research LLC,  All rights reserved.

The Hacker-Proof Automobile

The Information Security Analyst sat quietly in the audience. He had driven for hours to hear this presentation, and he could barely believe what he was hearing. The speaker, the head of a government organization responsible for protecting his country’s information systems, was downplaying the importance of automotive cyber security, comparing those worried about the situation to “Chicken Little,” running around and complaining that the sky was falling. “Wow,” he thought. “Does this guy just not understand the situation, or is he pretending that it isn’t a problem for some reason?” The analyst knew full well there was a problem, because he had read two important papers on the topic.

The first was titled “Comprehensive Experimental Analyses of Automotive Attack Surfaces.” The second was titled “Experimental Security Analysis of a Modern Automobile.” These two papers, both written by a team of researchers from the University of California, San Diego and the University of Washington, painted a very different picture of automotive cyber security. The papers did not merely point out that there were vulnerabilities; the researchers demonstrated exploits against them. Three experiments were most notable. First, they demonstrated that it was possible to hack a vehicle through a music file, which would play fine on a computer or a stereo system but would deliver software updates to onboard computers called Electronic Control Units (ECUs) when played on a vehicle stereo system. Next, they demonstrated that it was possible to hack a car while the car was in motion, disabling the brakes at 40 miles per hour. Finally, they demonstrated that multiple cars, geographically separated by a large distance, could be hacked and then commanded to respond to remotely issued commands in unison.

The authors left it to the reader to speculate what sort of major cyber-attack might be possible should some gifted hacker, terrorist group or nation state decide to get very nasty. The idea of millions of cars simultaneously losing their brakes while driving over 55 mph came to the analyst’s mind. “Guess that means I’m Chicken Little,” he thought. “Well, at least I’m not running around claiming the sky is falling.” Of course, he would do something about it. He was planning to get another car. This car would be cyber hardened because it would contain no ECUs. This car would be a 1966 Corvette.

The Hacker-Proof 1966 Corvette Stingray

Two important papers on automotive cyber security…

http://www.autosec.org/pubs/cars-oakland2010.pdf

http://www.autosec.org/pubs/cars-usenixsec2011.pdf

copyright 2013 NetChime Research LLC,  All rights reserved.

Controlling Your Interfaces (Part 1)

It was July 2000, and one could imagine the product manager’s frustration when he learned the news. A hacker going by the handle “Kingpin” had found a vulnerability in the iKey® 1000*. Furthermore, @Stake, a company associated with Kingpin, was planning to go public with this information by publishing a security advisory. The iKey® 1000 was poised to be a great success, and the last thing the Rainbow Technologies** product manager needed was to have his information security device branded as “insecure.”

Joe Grand, aka “Kingpin”

Fortunately, there was time to act.  @Stake had given Rainbow a grace period, just enough time to admit that @Stake had found something significant and to promise Rainbow’s customers that some necessary changes would be coming.    So, when @Stake released the advisory, describing the attack in sufficient detail for hackers to reproduce it, they also expressed admiration for Rainbow’s professionalism and responsiveness.

See  http://dl.packetstormsecurity.net/advisories/l0pht/l0pht.00-07-20.ikey

This was a consolation for Rainbow Technologies. Of course, they would have looked better if @Stake had not found the vulnerability.

The iKey® 1000 was designed to store passwords and private keys for authentication purposes, thus providing a means for two-factor authentication. The first factor (something you have) was the iKey® 1000 itself, and the second factor (something you know) was a user password. To access the passwords and private keys stored in the device, the user would provide the user password, and the iKey® 1000 would then provide access to the private keys or other passwords stored within it. There was also a master password that could be used to access all of the iKey® 1000’s stored secrets.
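
As a sketch of that unlock flow, here is a toy token that releases its stored secrets only after the user password checks out. It assumes a salted, one-way password hash, which is the sound way to build such a check; Part 2 describes how the real device’s reversible encoding undermined this kind of design:

```python
# Toy two-factor token: the device (factor 1) holds secrets that unlock
# only with the user password (factor 2). Secret names are illustrative.
import hashlib, hmac, secrets

class Token:
    def __init__(self, user_password: str):
        self.salt = secrets.token_bytes(16)
        self.pw_hash = hashlib.pbkdf2_hmac(
            "sha256", user_password.encode(), self.salt, 100_000)
        self._secrets = {"vpn-private-key": b"\x01\x02\x03"}  # illustrative

    def unlock(self, password: str) -> dict:
        candidate = hashlib.pbkdf2_hmac(
            "sha256", password.encode(), self.salt, 100_000)
        if not hmac.compare_digest(candidate, self.pw_hash):
            raise PermissionError("bad password")
        return self._secrets

token = Token("correct horse battery staple")
keys = token.unlock("correct horse battery staple")   # factor 2 checks out
```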

* iKey® is a registered trademark of Safenet.
** In 2004, SafeNet merged with Rainbow Technologies.

To be continued in “Controlling Your Interfaces (Part 2)”

Controlling Your Interfaces (Part 2)

Continued from “Controlling Your Interfaces (Part 1)”

Inside the iKey® 1000 there were a microprocessor and a serial EEPROM. The EEPROM is where the secrets were stored. What Kingpin was able to do was find out where an encoded (obfuscated) hash of the master password was stored in the EEPROM. He could do this because Rainbow had provided a direct interface to the EEPROM. Rainbow had done this intentionally, making an internally available interface for adding more memory to the iKey® 1000, and purposely did not apply a conformal coat to this interface, so that the iKey® 1000 could be easily upgraded.

The Rainbow Technologies iKey® 1000

Kingpin was able to access the memory through this interface and figure out the encoding function (and its inverse). This meant that anyone understanding the technique could get to the iKey® 1000’s secrets without the user or master passwords.
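
A toy illustration of why this was fatal: the XOR mask below is an invented stand-in for Rainbow’s actual encoding, but any fixed, invertible function shares the same weakness, because recovering the function (and its inverse) once unlocks every device:

```python
# A fixed, reversible encoding protects nothing once the function leaks.
MASK = bytes(range(16))          # hypothetical stand-in for the encoding

def encode(data: bytes) -> bytes:
    return bytes(d ^ m for d, m in zip(data, MASK))

decode = encode                  # XOR with a fixed mask is its own inverse

stored = encode(b"hash-of-master")    # what sits in the EEPROM
recovered = decode(stored)            # what the attacker can compute
assert recovered == b"hash-of-master"
```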

See  http://ebookbrowse.com/safenet-datasheet-ikey-1000-pdf-d106292414

Rainbow could have done many things to make @Stake’s attack more difficult, starting with better control of the interface to the EEPROM. One way to do this would have been to encase everything on the PCB but the USB interface in epoxy.

Companies frequently include “back doors,” expansion ports and/or test interfaces in systems for their own purposes. These interfaces provide potential new avenues for attack. If you find that you need to include interfaces like these in your company’s products or systems, consider including some controls that will make it difficult for attackers. Insufficient interface control is a common security problem in information systems. While, overall, the percentage of people with the skills and intent to exploit your interfaces may be low, there are still lots of people and organizations with the skills, and many of these would be happy to give it a try. Why make it easy for them?

“FIPS 140 Made Easy” Part 1

The product manager smiled and offered his guest some coffee. “What we’ve managed to do is build a high performance cryptographic processor with just the right combination of algorithms and other characteristics. This has uniquely positioned our company for a rapidly growing market. It turns out that the US government is very interested, and we can seize a significant portion of this opportunity by moving quickly. The only problem is that the government wants our device to be FIPS certified before they’ll commit to buying any.” The product manager paused and waited for a response from his guest, an information security engineer who had experience designing products to meet NIST’s FIPS 140-2 requirements.

Typical FIPS 140 Certification

His guest suppressed a smile when he heard that getting a FIPS certification was “the only problem.”  He knew from what had been said so far that there were many other problems.  That’s because FIPS 140-2 consists of many requirements, and each one can result in a significant amount of work.   He had been through this scenario twice before, and in both cases, the same big mistake had been made.  The design engineers should have known about the FIPS 140-2 requirements from the beginning.  Now there were bound to be software changes, retesting and additional troubleshooting.  “Do you know what level of certification you need?” he asked.

“We’re thinking level 4,” the product manager replied. “Do you think that will be a problem?”

“Well… if you haven’t been designing for a level 4 FIPS certification up to this point, it’s highly likely you’re going to need both hardware and software design changes.”

“Do you have any suggestions for me?”

To Be Continued…

“FIPS 140 Made Easy” Part 2

Continued from “FIPS 140 Made Easy” Part 1…

“Do you have any suggestions for me?”

“Sure,

  1. Maybe you only need a level 4 for one of the areas. For example, let’s say you only need to meet the FIPS 140-2 level 4 physical security requirements. It’s possible to have your device’s physical security certified at level 4 but have an overall certification level of 2. This might be good enough for the application your customers have in mind, and it will take much less time and money to get the certification.

    Compliance levels may vary by FIPS 140 section

  2. Consider changing your design so that it has approved and non-approved modes of operation.  Some of your customers may not want a device that obeys all the rules of FIPS 140-2.   You can retain a non-approved mode of operation that will function in a way that will still satisfy those customers.
  3. Make the Finite State Machine description of your device as simple as possible. That means with as few states and as few ways to move between those states as possible. This should be a high-level finite state machine. Your device will obviously have many more low-level states, but the more states you add to your high-level description, the more work you’ll make for yourself and the more work you’ll make for the certifying lab. (A sketch of such a machine appears after this list.)

    Make your FIPS 140 Finite State Machine simple

  4. If there is an embedded OS, consider designing your system so that the OS cannot be modified. If you don’t do this, for level 2 devices and above, you’re going to need to ensure that the operating environment is evaluated to at least Common Criteria Evaluation Assurance Level 2 (EAL 2). If the operating environment hasn’t already been evaluated, the additional work necessary will significantly increase your development costs and will cause significant schedule delays.”
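
A minimal sketch of what such a high-level state machine might look like, with few states and explicitly enumerated transitions; the state and event names are illustrative, not taken from the standard:

```python
# Small, explicit FSM: anything not listed here is an illegal transition.
# It also reflects point 2 above: distinct approved and non-approved modes.
TRANSITIONS = {
    ("POWER_OFF",         "power_on"):          "SELF_TEST",
    ("SELF_TEST",         "tests_pass"):        "IDLE",
    ("SELF_TEST",         "tests_fail"):        "ERROR",
    ("IDLE",              "enter_approved"):    "APPROVED_MODE",
    ("IDLE",              "enter_nonapproved"): "NON_APPROVED_MODE",
    ("APPROVED_MODE",     "done"):              "IDLE",
    ("NON_APPROVED_MODE", "done"):              "IDLE",
    ("ERROR",             "zeroize"):           "ZEROIZED",
}

def step(state: str, event: str) -> str:
    nxt = TRANSITIONS.get((state, event))
    if nxt is None:
        raise ValueError(f"illegal transition: {event} in {state}")
    return nxt

s = step("POWER_OFF", "power_on")   # -> SELF_TEST
s = step(s, "tests_pass")           # -> IDLE
s = step(s, "enter_approved")       # -> APPROVED_MODE
```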

“What about FIPS 140-3?”

To be continued…

“FIPS 140 Made Easy” Part 3

Continued from “FIPS 140 Made Easy” Part 2…

“What about FIPS 140-3?”

“You should probably check the FIPS 140-3 standard as well. Presently, it’s in draft form, so it could change. Still, it’s possible FIPS 140-3 could become the new standard before you’re through with your current certification effort. Knowing what’s in it shouldn’t hurt you. Here’s the link for you…”

http://csrc.nist.gov/publications/PubsDrafts.html#FIPS-140–3

“Anything else?” asked the product manager.

“Yes. The next time you design a crypto device, consider whether you need a FIPS 140 certification, and what level of certification you might need, during the requirements definition phase. It is usually easier to get your crypto design certified when it is designed to meet the security requirements from the outset. Modifying an existing design to meet the same requirements after the fact can be quite difficult.”

The information security engineer got the job, but it was a short contract.  Soon, the product manager began to realize how important that last piece of advice was, and the company decided it didn’t have the time or the money to seize that big government business opportunity.

The End