The electronics test engineer was the first to enter the lab that morning. He turned on the radio and thought to himself “That’s the third time I’ve heard that song today, and it’s so monotonous.” It was time for a new station. He turned the tuning knob and soon realized he was scanning the AM band. “Why do the lab techs always insist on listening to AM stations?” He switched the radio to the FM band and was soon listening to his favorite golden oldies station. OK, it was time to get to work.
He applied power to the unit under test, and a horrible buzzing sound filled the room. It was coming from the radio. He tried other FM stations; each one was accompanied by the same buzzing. “Oh…now I understand,” he thought. The prototype had been generating electromagnetic interference, and that interference had been garbling the music on the FM stations. The lab techs had probably adapted by listening only to AM stations.
To test his theory, he switched the radio back to the AM band, trying various stations. Each AM station came through loud and clear. The electronics test engineer had a problem, because his company would not be able to sell a device that interfered with FM broadcasts. He knew the prototype would eventually be sent to a special testing lab for an FCC certification. The electromagnetic compatibility (EMC) test technicians there would also detect the interference. He hoped he would be able to find the source of the problem and fix it before schedules and budgets were negatively impacted.
To be continued in “Egad… Electronic Interference is Emanating From the Prototype! (Part 2)”
Continued from “Egad… Electronic Interference is Emanating From the Prototype! (Part 1)”
He was reminded of his childhood, driving through the desert with his father, listening to music. They had crossed a river by driving over (actually, by driving through) a truss bridge. The radio went dead. “What happened to the music, Dad?” he asked. His father, who had studied electronics in the Navy, said, “Well son, this bridge is acting like a Faraday cage. The metal is shielding us from the radio waves coming from the AM radio station.”
The truss bridge acts like a Faraday cage
“Why can’t the radio waves get through the holes in the cage?” he had asked.
“You can’t see them, but radio waves have a size called a wavelength and a strength called an amplitude. These radio waves are too big and not strong enough to get very far through the holes. If we switch to an FM station, we’ll be receiving radio waves with a shorter wavelength. These waves are small enough to get through the holes, and we’ll hear the music again.” The old man flipped the AM/FM band selector on the radio to FM, turned the tuning knob and, sure enough, there was music again.
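The father's explanation rests on a simple relationship: wavelength equals the speed of light divided by frequency (λ = c/f). A quick calculation (the station frequencies below are illustrative, not from the story) shows why AM and FM behave so differently at the bridge:

```python
# Wavelength = speed of light / frequency.
# AM broadcast waves are hundreds of meters long, far larger than the
# openings in a truss bridge; FM waves are only a few meters long.

C = 299_792_458  # speed of light in m/s

def wavelength_m(freq_hz: float) -> float:
    """Return the wavelength in meters for a given frequency in Hz."""
    return C / freq_hz

am = wavelength_m(1_000_000)      # a typical AM station at 1000 kHz
fm = wavelength_m(100_000_000)    # a typical FM station at 100 MHz

print(f"AM (1000 kHz): {am:.1f} m")   # ~299.8 m -- blocked by the bridge's "cage"
print(f"FM (100 MHz):  {fm:.2f} m")   # ~3.00 m  -- comparable to the openings
```

This is also why the lab radio picked up the prototype's emissions on FM but not AM: the interference energy fell in the FM band.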
Remembering this, the electronics test engineer realized that a metal lid had been removed from the prototype to replace some read-only memory (ROM) components. He selected the FM setting on the radio, and the buzzing returned. He found the lid, bolted it to the prototype, and the buzzing sound stopped. There really wasn’t a problem after all. The necessary shielding was in the design; it had simply been removed during the ROM replacement. The electronics test engineer was thankful for his discovery. He had one less problem to worry about. He was also thankful for the nice memory of driving through the desert and being with his father.
It was July 2000, and one could imagine the product manager’s frustration when he learned the news. A hacker going by the handle “Kingpin” had found a vulnerability in the iKey® 1000*. Furthermore, @Stake, a company associated with Kingpin, was planning to go public with this information by publishing a security advisory. The iKey® 1000 was poised to be a great success, and the last thing the Rainbow Technologies** product manager needed was to have his information security device branded as “insecure.”
Joe Grand aka “Kingpin”
Fortunately, there was time to act. @Stake had given Rainbow a grace period, just enough time to admit that @Stake had found something significant and to promise Rainbow’s customers that some necessary changes would be coming. So, when @Stake released the advisory, describing the attack in sufficient detail for hackers to reproduce it, they also expressed admiration for Rainbow’s professionalism and responsiveness.
This was some consolation for Rainbow Technologies. Of course, they would have looked better if @Stake had not found the vulnerability at all.
The iKey® 1000 was designed to store passwords and private keys for authentication purposes, thus providing a means of two-factor authentication. The first factor (something you have) was the iKey® 1000 itself, and the second factor (something you know) was a user password. To access the passwords and private keys stored in the device, the user would provide the user password, and the iKey® 1000 would then provide access to the private keys or other passwords stored within it. There was also a master password that could be used to access all of the iKey® 1000’s stored secrets.
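The two-factor flow described above can be sketched in a few lines. Everything here (class name, salt, hashing scheme) is hypothetical and for illustration only; it is not the iKey® 1000’s actual internal design:

```python
import hashlib
import hmac

# Hypothetical sketch of the two-factor idea: the device (something you
# have) releases its stored secrets only after the caller proves knowledge
# of a password (something you know). Not the iKey 1000's real scheme.

class TokenSketch:
    def __init__(self, user_password: str, master_password: str):
        # Store only salted hashes of the passwords, never the passwords.
        self._salt = b"demo-salt"  # illustrative; real devices use a random salt
        self._user_hash = self._hash(user_password)
        self._master_hash = self._hash(master_password)
        self._secrets = {"vpn_key": "s3cr3t"}  # stand-in for stored keys/passwords

    def _hash(self, password: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", password.encode(), self._salt, 100_000)

    def unlock(self, password: str):
        """Release stored secrets if the user or master password matches."""
        h = self._hash(password)
        if hmac.compare_digest(h, self._user_hash) or hmac.compare_digest(h, self._master_hash):
            return self._secrets
        return None

token = TokenSketch("user-pass", "master-pass")
print(token.unlock("user-pass"))   # secrets released
print(token.unlock("wrong"))       # None -- access denied
```

The key property is that secrets are only released through the password check; Kingpin’s attack, described next, worked because there was a second path to the secrets that bypassed this gate.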
* iKey® is a registered trademark of Safenet.
** In 2004, SafeNet merged with Rainbow Technologies.
To be continued in “Controlling Your Interfaces (Part 2)”
Continued from “Controlling Your Interfaces (Part 1)”
Inside the iKey® 1000 there was a microprocessor and a serial EEPROM. The EEPROM was where the secrets were stored. Kingpin was able to find where an encoded (obfuscated) hash of the master password was stored in the EEPROM. He could do this because Rainbow had provided a direct interface to the EEPROM. Rainbow had done this intentionally, making an internally available interface for adding more memory to the iKey® 1000, and had purposely not applied a conformal coat to this interface so that the iKey® 1000 could be easily upgraded.
The Rainbow Technologies iKey(r) 1000
Kingpin was able to access the memory through this interface and figure out the encoding function (and its inverse). This meant that anyone who understood the technique could get at the iKey® 1000’s secrets without the user or master passwords.
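To see why a merely obfuscated value offers so little protection, consider a deliberately weak, hypothetical encoding. This is not the iKey® 1000’s actual scheme; it only illustrates how any fixed, invertible transform can be reversed once an attacker recovers it:

```python
# Hypothetical illustration: a fixed XOR-based "encoding" is trivially
# invertible, so anyone who recovers the transform can decode whatever
# is stored. NOT the iKey 1000's real scheme -- just a demonstration
# of why obfuscation is not protection.

KEY = bytes([0x5A, 0xC3, 0x3C, 0xA5])  # fixed key; recoverable by analysis

def encode(data: bytes) -> bytes:
    """'Obfuscate' data with a repeating XOR key."""
    return bytes(b ^ KEY[i % len(KEY)] for i, b in enumerate(data))

def decode(data: bytes) -> bytes:
    """XOR is its own inverse, so decoding is identical to encoding."""
    return encode(data)

stored = encode(b"master-password-hash")
print(stored != b"master-password-hash")  # True: looks scrambled in an EEPROM dump...
print(decode(stored))                     # ...but is fully recoverable
```

Once the transform is known, every device using it is compromised, which is exactly the situation a published advisory creates.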
Rainbow could have done many things to make @Stake’s attack more difficult. For example, Rainbow could have done a better job of controlling the interface to the EEPROM. One way to do this would have been to encase everything but the USB interface of the PCB in epoxy.
Companies frequently include “back doors,” expansion ports, and/or test interfaces in systems for their own purposes. These interfaces provide potential new avenues for attack. If you find that you need to include interfaces like these in your company’s products or systems, consider including some controls that will make things difficult for attackers. Insufficient interface control is a common security problem in information systems. While the percentage of people with both the skills and the intent to exploit your interfaces may be low, there are still plenty of people and organizations with the skills, and many of them would be happy to give it a try. Why make it easy for them?
The product manager smiled and offered his guest some coffee. “What we’ve managed to do is build a high performance cryptographic processor with just the right combination of algorithms and other characteristics. This has uniquely positioned our company for a rapidly growing market. It turns out that the US government is very interested, and we can seize a significant portion of this opportunity by moving quickly. The only problem is that the government wants our device to be FIPS certified before they’ll commit to buying any.” The product manager paused and waited for a response from his guest, an information security engineer who had experience designing products to meet NIST’s FIPS 140-2 requirements.
Typical FIPS 140 Certification
His guest suppressed a smile when he heard that getting a FIPS certification was “the only problem.” He knew from what had been said so far that there were many other problems. That’s because FIPS 140-2 consists of many requirements, and each one can result in a significant amount of work. He had been through this scenario twice before, and in both cases, the same big mistake had been made. The design engineers should have known about the FIPS 140-2 requirements from the beginning. Now there were bound to be software changes, retesting and additional troubleshooting. “Do you know what level of certification you need?” he asked.
“We’re thinking level 4,” the product manager replied. “Do you think that will be a problem?”
“Well… if you haven’t been designing for a level 4 FIPS certification up to this point, it’s highly likely you’re going to need both hardware and software design changes.”
“Do you have any suggestions for me?”
To Be Continued…
Continued from “FIPS 140 Made Easy” Part 1…
“Do you have any suggestions for me?”
- Maybe you only need a level 4 for one of the areas. For example, let’s say you only need to meet the FIPS 140-2 level 4 physical security requirements. It’s possible to have your device’s physical security certified at level 4 but have an overall certification level of 2. This might be good enough for the application your customers have in mind, and it will take much less time and money to get the certification.
Compliance levels may vary by FIPS 140 section
- Consider changing your design so that it has approved and non-approved modes of operation. Some of your customers may not want a device that obeys all the rules of FIPS 140-2. You can retain a non-approved mode of operation that will function in a way that will still satisfy those customers.
- Make the Finite State Machine description of your device as simple as possible. That means with as few states, and as few ways to move between those states, as possible. This should be a high-level finite state machine. Your device will obviously have many more low-level states, but the more states you add to your high-level description, the more work you’ll make for yourself and for the certifying lab.
Make your FIPS 140 Finite State Machine simple
- If there is an embedded OS, consider designing your system so that the OS cannot be modified. If you don’t do this, for level 2 devices and above, you’re going to need to ensure that the operating environment is evaluated to at least Common Criteria Evaluation Assurance Level 2 (EAL 2). If the operating environment hasn’t already been evaluated, the additional work necessary will significantly increase your development costs and cause significant schedule delays.”
“What about FIPS 140-3?”
To be continued…
Continued from FIPS 140 Made Easy part 2…
“What about FIPS 140-3?”
“You should probably check the FIPS 140-3 standard as well. Presently, it’s in draft form, so it could change. Still, it’s possible FIPS 140-3 could become the new standard before you’re through with your current certification effort. Knowing what’s in it shouldn’t hurt you. Here’s the link for you…”
“Anything else?” asked the product manager.
“Yes. The next time you design a crypto device, consider during the requirements definition phase whether you need a FIPS 140 certification and what level of certification you might need. It is usually easier to get your crypto design certified when it is designed to meet the security requirements from the outset. Modifying an existing design to meet the same requirements, after the fact, can be quite difficult.”
The information security engineer got the job, but it was a short contract. Soon, the product manager began to realize how important that last piece of advice was, and the company decided it didn’t have the time or the money to seize that big government business opportunity.