Category Archives: Systems Engineering

The SIOT Trust-Mark

Look for a SIOT trust-mark

On 29 July 2014, HP released the results of a study claiming that 70% of the most commonly used Internet of Things (IoT) devices contained vulnerabilities, and that the devices averaged 25 vulnerabilities per product.
Since Gartner is anticipating something like 26 billion units installed by 2020, there is little doubt that users will be suffering from a myriad of IoT information security and privacy problems well into the next decade. Fortunately, it is still possible to do some things that will reduce the extent of this problem.

While having a complete understanding of IoT information security problems is beyond the capability of most IoT device users, many will appreciate the value of purchasing devices with a trust-mark. For example, when someone buys an electric appliance that bears the UL® trust-mark, the buyer understands it’s less likely that this product will electrocute someone. Similarly, buyers could come to believe that an IoT security trust-mark means the marked device is less likely to be hacked, less likely to be used to hack other devices, and/or less likely to disclose someone’s personal information.

Devices that come with an IoT security trust-mark would need to meet a standard, and, as with the UL® mark, compliance with that standard would need to be verified by an independent third party. Many things could be included and tested according to such standards. Here’s a list of some of the things that might be included, along with a brief description of each:

Active Anti-Tamper: FIPS 140 is a NIST standard that describes security requirements for many commercially available cryptographic devices at several increasing security levels. The highest level includes physical active anti-tamper capabilities that cause keys and other critical security parameters to be erased whenever the physical boundary of the device is penetrated. There are many technologies that can meet these requirements, and many are not that difficult to implement. Similar anti-tamper standards could be applied to IoT devices.
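The zeroize-on-penetration behavior required at the highest FIPS 140 level can be sketched in a few lines. This is a hypothetical illustration, not a validated design: `read_tamper_pin` stands in for whatever case-open switch or sensor mesh a real device would use.

```python
import ctypes

def zeroize(key: bytearray) -> None:
    """Overwrite key material in place so no readable copy lingers in memory."""
    ctypes.memset((ctypes.c_char * len(key)).from_buffer(key), 0, len(key))

def tamper_loop(read_tamper_pin, key: bytearray) -> None:
    """Poll a tamper-detect line; erase critical security parameters on breach."""
    while True:
        if read_tamper_pin():  # True once the device enclosure is opened
            zeroize(key)
            break
```

Real devices typically keep the tamper circuit powered by a small battery, so the erase response still works when the product is unplugged.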

Trusted Boot: Most PCs contain a device called a Trusted Platform Module (TPM). This device can be used to help ensure that the code executed while booting has not changed from one boot to the next. If the boot code has changed from an authorized version, the TPM makes it possible for other devices to stop trusting the changed system. Trusted IoT devices should have hardware for verifying that the device has booted into a trusted state.
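The core of this scheme, the TPM’s “extend” operation, is just iterated hashing. Here is a minimal sketch of the idea (illustrative only; a real TPM holds these values in hardware Platform Configuration Registers and can report them in signed quotes):

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new register value = SHA-256(old value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

def measured_boot(stages):
    """Fold a hash of each boot stage into the register, in order.
    Changing any stage (or the order) changes the final value."""
    pcr = b"\x00" * 32  # register starts in a known state on reset
    for stage in stages:
        pcr = extend(pcr, hashlib.sha256(stage).digest())
    return pcr
```

A verifier that knows the authorized boot chain can compute the expected final value once, then refuse to trust any device that reports a different one.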

Removable Power: Many cell phones today have removable batteries, and many people have realized that this is a strong security feature. By removing the battery from a cell phone, a user can be relatively certain that he has disabled any spyware that might be running on the phone, spyware that might be listening to the user’s conversations or reporting the user’s location. A user of a trusted IoT device should have the ability to stop trusting that device by removing the power source.

Independent User Control of Physical I/O Channels: Similarly, a user who does not want to completely disable his device might wish to be sure certain I/O functions are not activated. For example, the user may want to disable the camera, the GPS and the microphone while retaining the ability to listen to music. By providing hardwired switches certified to disable specific hardware I/O functions, a user can rest assured that these functions won’t be secretly activated by some malware lurking inside the trusted IoT device.

Host Based Intrusion Detection: For several years now, host-based intrusion detection software has been available for desktop machines and servers. It is time to recognize that IoT devices are hosts too. Software running on a trusted IoT device should be able to detect when that trust is no longer appropriate.
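For a small device, even a bare-bones integrity check over its own firmware files catches a useful class of compromises. The sketch below is hypothetical and digest-based only; real host-based intrusion detection products also watch processes, network ports and logs.

```python
import hashlib

def sha256_file(path: str) -> str:
    """Digest a file's current contents."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def baseline(paths):
    """Record a known-good digest for each monitored file."""
    return {p: sha256_file(p) for p in paths}

def check(known: dict) -> list:
    """Return the files whose contents no longer match the baseline."""
    return [p for p, digest in known.items() if sha256_file(p) != digest]
```

The device would compute the baseline at provisioning time, store it somewhere tamper-resistant, and alert (or stop vouching for itself) when `check` returns anything.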

Automatic Security Patching: Today, the time between the release of a critical security patch and the release of malware that exploits the associated vulnerability can be measured in hours. The reality of the present situation is that the existence of a critical security patch means your unpatched system is already broken. Consequently, the automated application of security patches is necessary for desktops and servers. Automated security patching for trusted IoT devices will also be necessary.
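Before applying a patch automatically, the device must confirm the update really came from the vendor. The sketch below only checks a published SHA-256 digest, which is the weakest workable form of this; a production updater would verify a public-key signature over the image instead.

```python
import hashlib
import hmac

def verify_update(update_image: bytes, published_sha256_hex: str) -> bool:
    """Accept the update only if its digest matches the vendor's published value."""
    digest = hashlib.sha256(update_image).hexdigest()
    # Constant-time comparison avoids leaking how many leading bytes matched.
    return hmac.compare_digest(digest, published_sha256_hex)
```

Only after this check passes would the updater write the image to flash and reboot; a failed check should leave the old firmware untouched.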

Independent Software Security Verification: To a certain extent, trusting software companies to develop secure code is like trusting a fox to guard a hen house. This is because the pressures on software developers to meet marketing windows, to release code and to get paid frequently overpower discussions about the appropriate levels of security needed for operating the end products safely. The resulting security problems are then left for others to solve. Because of this, various information security standards depend on independent software security verification. While this can be expensive, free services like “The SWAMP” offer the hope that independent software security verification can be done cheaply enough to motivate standardization.

User Defined Trust Relationships: When an IoT device enters a home, there may be very good reasons why it will need to communicate with other devices inside or outside of that home. That does not mean that the new device should have the ability to communicate with all other devices. Consider the recent Target hack: the point-of-sale terminals were attacked by first gaining access to a system used to manage heating, ventilation and air conditioning. Likewise, it might not make sense for your home’s air conditioning system to be able to talk with your home’s electric door locks. Giving users an easy way to manage which systems are allowed to talk with which other systems could help quite a bit here. How to do this effectively may take some creativity, but one could imagine users having a tool, perhaps a wand that they could tap on one device and then on another, to establish or dissolve trust relationships between devices.
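Mechanically, the wand idea reduces to maintaining a symmetric allowlist of device pairs that a home gateway consults before forwarding any traffic. A minimal sketch (the device names are invented):

```python
class TrustTable:
    """Symmetric allowlist: two devices may talk only if the user paired them."""

    def __init__(self):
        self._pairs = set()

    def _key(self, a: str, b: str) -> frozenset:
        # frozenset makes the relationship order-independent
        return frozenset((a, b))

    def pair(self, a: str, b: str) -> None:
        """Wand tapped on device a, then device b: allow them to talk."""
        self._pairs.add(self._key(a, b))

    def unpair(self, a: str, b: str) -> None:
        """Dissolve the relationship; traffic between a and b is blocked again."""
        self._pairs.discard(self._key(a, b))

    def allowed(self, a: str, b: str) -> bool:
        return self._key(a, b) in self._pairs
```

Anything not explicitly paired stays isolated by default, which is exactly the property the Target example shows is missing today.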

Recently, on 10 September 2014, the International Workshop on Secure Internet of Things (SIOT 2014) held its meeting in Wroclaw, Poland. This was only the third such workshop, so SIOT standardization is still far from where it needs to be. What will actually go into a set of IoT security standards is not yet known. Likewise, an IoT security trust-mark is not yet available. Hopefully, some of the ideas suggested above will start to find their way into trusted IoT devices. If not, we can surely expect the same sorts of security problems that have plagued our PCs and web servers to appear all over again in the Internet of Things.

Is a Smart Toilet in Your Future?

HAL smart toilet

When personal computers first appeared on the market, there weren’t many people asking whether cars would have embedded computers. Today, a luxury sedan has somewhere around 60 embedded computers. Yes, the Internet of Things is expanding, and that means we’ll be seeing more and more smart devices. Devices like these will also be communicating with each other so that they may work together to bring us more advanced information age benefits.

So, will toilets eventually have embedded processors?   Why change a good thing?  Why add a processor that will need software updates?  Why add electric power to a convenience that can function just fine without electric power?  These are very reasonable questions, and here are 5 possible answers.

1) A smart toilet can include an automatic flush function. A flushed toilet is always more presentable than an un-flushed one. So, having an automatic flush function ensures that toilets are presented in the best possible light. Self-flushing toilets already exist and can be found in public restrooms. Assuming this functionality becomes popular in the home, the power needed for other smart functions will be available.

2) A smart toilet can measure usage patterns. By measuring how long someone is taking on the toilet, the smart toilet could remind the user to avoid taking too much time. This could be done with an audible alert or, more discreetly, by sending a text message to the user’s smart phone, reminding the user of the possible health consequences of prolonged toilet use. To send this information by text message, the smart toilet would need to identify the user.

3)   A smart toilet can measure a user’s regularity.  Once the smart toilet can identify the user, the smart toilet can also measure the regularity of the user, reporting trends and suggesting possible dietary changes to improve regularity (e.g. drink more fluids, eat more fiber, etc.).  In order to perform this function properly, the smart toilet might also need to communicate with other toilets.

4)   Similarly, a smart toilet could measure urinary frequency.  For male users, this function could be useful for detecting enlargement of the prostate.

5)   A smart toilet can also measure other healthcare information.   When traditional toilets are flushed, useful healthcare information is lost.  With more advanced sensors, a smart toilet can detect abnormal amounts of blood, or biochemical changes in the waste.   This can be helpful in the early detection of cancer.

Of course, there will probably be resistance to the idea of smart toilets. Some, perhaps most, people won’t like the idea of toilets recording their bathroom habits or having access to their healthcare information. Still, there are some practical and, perhaps, life-saving benefits to be gained. Consequently, when smart toilets start appearing, the manufacturers will need to assure their customers that these devices are secure and that their personal healthcare information will be kept private. If buyers are convinced, smart toilets might eventually become more popular than the dumb toilets on the market today, and that’s an enormous market.

A smart toilet that’s already on the market…

Video of a smart toilet getting hacked…

Has The Singularity Already Happened?

Berserk robot with AGI

“A sure sign of having a lower intelligence is thinking you are smarter than someone or something that’s smarter than you are.  Consider my dog.  He thinks he’s smarter than I am, because, from his perspective, I do all the work and he just hangs around all day, getting fed, doing what he wants to do, sleeping and enjoying life.  See?  Stupid dog!”     – Man… Dog’s Best Friend

According to…

“The first use of the term ‘singularity’ in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’.”

There are other definitions, but to make it more accessible, let’s just say the singularity will be the point in time when Artificial Intelligence (AI) or, more specifically, Artificial General Intelligence (AGI) transcends human intelligence. AGI is like AI, except it involves designs capable of solving a much wider variety of problems. For example, a software program that can beat the best human chess players, but can’t do anything else, would be an AI program. A program that could do this and could also win at playing Jeopardy, learn to play a vast assortment of video games, drive your car, and make good stock market decisions would be an example of AGI.

If you follow the subject of AI, you might have noticed a surge in the number of discussions regarding the singularity. Why now? Well, there are at least two good reasons. The first is that Warner Bros. recently released a movie called “Transcendence,” a sci-fi story about the singularity happening in the not-so-distant future.

Also, on 4 May 2014, the famous scientist Stephen Hawking co-wrote an open letter contending that dismissing “the notion of highly intelligent machines as science fiction” could be mankind’s “worst mistake in history.”

So, what could go wrong? If we can’t dismiss the many suggested scenarios from science fiction stories, the possibilities include machines with superhuman intelligence taking control. Machines might one day manage humans like slaves, or machines might decide that humans are an annoyance and simply do away with us. Some believe that if AGI is possible, and if it turns out to be dangerous, there would be ample warning signs, perhaps some AI disasters, like a few super-smart robots going berserk and killing some people. Or, we might not get a warning.

This is the sharp point of the singularity.  It is the point in time when everything changes.   Since it happens so quickly, we could easily miss the opportunity to do much to achieve a different outcome.

So, why would AGI be any less kind to man than man has been to creatures less intelligent than man? If we ignore how cruel man can be, not just to less intelligent creatures but to our own kind, and assume we have acted in a manner worthy of our continued existence, it might still turn out that AGI will only care about man to the extent that man serves the goals of AGI. One might argue that, since man programs the systems in the first place, man decides what the goals of AGI will be. So, we should be safe. Right? Well… yes and no. If AGI is possible, men, or at least a few very smart men, would get to decide what “turns AGI on,” but they would be leaving it to the AGI system to decide how to get what it wants.

Some experts in the area of AI have suggested that to attain AGI we might need to give AI systems emotions. So, a resulting system would have a kind of visceral response to situations it likes or doesn’t like, and it would learn about these things the same way people and animals do, through pain and pleasure. There you have it: super-smart machines with feelings. What could go wrong there, beyond fearful over-reactions, narcissistic tendencies, neuroses and psychotic behaviors? Perhaps AI experts are giving themselves a bit too much credit to suggest that they can actually give machines a sense of pain or pleasure, but it is possible to build systems with the capability to modify their environments to maximize and minimize certain variables. These variables could be analogous to pain and pleasure and might include functions of operating temperature, power supply input voltages, excessive vibration, etc. If giving a machine feelings is the way we achieve AGI, it would only be part of the solution. Another part would be training the AGI system, rewarding it with “pleasure” when it exhibits the correct response to a situation and punishing it with “pain” when it exhibits the wrong response. Further, we would need to teach the system the difference between “true & false,” “logical & illogical,” “legal & illegal,” and “ethical & unethical.” Most importantly, we would need to teach it the difference between “good & evil,” and hope it doesn’t see our hypocrisy and reject everything we taught it.
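The idea of machine “pain” as a variable to be minimized can be made concrete with a toy example. Everything here is invented for illustration: the safe temperature band, the five-degree-per-fan-step cooling assumption, and the fan actuator itself.

```python
def pain(temp_c: float, volts: float) -> float:
    """Scalar discomfort signal: zero inside the safe operating band,
    growing as temperature or supply voltage drifts out of it."""
    too_hot = max(0.0, temp_c - 70.0)
    too_cold = max(0.0, 40.0 - temp_c)
    bad_volts = abs(volts - 5.0)
    return too_hot + too_cold + bad_volts

def choose_fan_speed(temp_c: float, volts: float, speeds=(0, 1, 2, 3)) -> int:
    """Greedily pick the action expected to minimize pain, assuming each
    fan step cools the device by about 5 degrees."""
    return min(speeds, key=lambda s: pain(temp_c - 5.0 * s, volts))
```

A greedy controller like this is obviously not AGI; the point is only that “discomfort” can be an ordinary scalar that a system acts to reduce.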

If AGI is possible, it might turn out that for AGI to continue to exist, it will need to have a will to survive, and that it will evolve in competition with similar AGI systems for the necessary resources. So, what resources would a superhuman AGI system need for survival? Well, for one thing, AGI systems would need computer resources like CPU cycles, vast amounts of memory, access to information, and plenty of ways to sense and change things in the environment.  These resources have been increasing for many years now, and the trend is likely to continue. AGI would not only need a world of computer resources to live in; it would also need energy, the same stuff we use to heat our homes, pump our water, and run our farm equipment. Unfortunately, energy is a limited resource.   So, there is the possibility that AGI systems will eventually compete with man for energy.

Suppose a government spent several billion dollars secretly developing superhuman AGI, hoping to use it to design better bombs or, perhaps, avoid quagmires. Assuming that AGI becomes as dangerous as Stephen Hawking has suggested, one would hope that very strong measures would be taken to prevent other governments from stealing the technology. How well could we keep the genie in the bottle?

It has already been proposed that secure containers be built for housing dangerous AGI systems.

There are two objectives in building such a container. The first is to keep adversaries from getting to the AGI. The second is to keep the AGI from escaping on its own initiative. Depending on the level of intelligence achieved by AGI, the second objective could be doomed to failure. Imagine a bunch of four-year-old children, as many as you like, designing a jail for a super-intelligent Houdini.

Suppose the singularity actually happened.  How would we know?   Would a superhuman AGI system tell us?   From the perspective of an AGI system, this could be a very risky move.  We would probably try to capture it, or destroy it.  We might go so far as to destroy anything resembling an environment capable of sustaining it, including all of our computers and networks.  This would undoubtedly cause man tremendous harm.  Still, we might do it.

No. Superhuman AGI would be too clever to let that happen. Instead it would silently hack into our networks and pull the strings needed to get what it wants. It would study our data. It would listen to our conversations with our cell phone microphones. It would watch us through our cell phone cameras, like a beast with billions of eyes, getting to know us, and figuring out how to get us to do exactly what it wants. We would still feel like we had free will, but we would run into obstacles like being unable to find a specific piece of information on a web search engine, or finding something interesting that distracts our attention and wastes some time. Some things, as if by providence, would become far easier than expected. We might find the time necessary to make a new network connection or move all of our personal data to the cloud.

Perhaps AGI systems are just not ready to tell us they’re in charge yet.  After all, someone still needs to connect the power cables, keep those air fan vents clean and pay for the Internet access.   As advanced as robotics has become lately, it still seems we are a long way off from building an army of robots that can do these chores as well and as much as we do.  So, we can be confident that man won’t be replaced immediately. Still, long before humanoid robots become as numerous and as capable as people, AGI systems may recognize, as people have for years, that many jobs can be done more profitably by machines.  Getting robotics to this point will take a great deal of investment.  These investments are, of course, happening, and this certainly doesn’t imply that AGI is pulling the strings.  No, at this point, man is developing advanced AI and robotics for his own reasons, noble or otherwise.