Category Archives: Robotics

Is a Smart Toilet in Your Future?

When personal computers first appeared on the market, few people were asking whether cars would have embedded computers. Today, a luxury sedan has somewhere around 60 embedded computers. Yes, the Internet of Things is expanding, and that means we'll be seeing more and more smart devices. Devices like these will also communicate with each other so that they can work together to bring us more advanced information-age benefits.

So, will toilets eventually have embedded processors? Why change a good thing? Why add a processor that will need software updates? Why add electric power to a convenience that functions just fine without it? These are very reasonable questions, and here are five possible answers.

1) A smart toilet can include an automatic flush function. A flushed toilet is always more presentable than an unflushed one, so an automatic flush function ensures that toilets are presented in the best possible light. Self-flushing toilets already exist and can be found in public restrooms. Assuming this functionality becomes popular in the home, the power needed for other smart functions will be available.

2) A smart toilet can measure usage patterns. By measuring how long someone spends on the toilet, the smart toilet could remind the user to avoid taking too much time. This could be done with an audible alert or, more discreetly, by sending a text message to the user's smartphone, reminding the user of the possible health consequences of prolonged toilet use. To send this information by text message, the smart toilet would need to identify the user.

3) A smart toilet can measure a user's regularity. Once the smart toilet can identify the user, it can also track the user's regularity, reporting trends and suggesting possible dietary changes to improve it (e.g., drink more fluids, eat more fiber). To perform this function properly, the smart toilet might also need to communicate with other toilets, since a user's visits may be spread across several of them.

4)   Similarly, a smart toilet could measure urinary frequency.  For male users, this function could be useful for detecting enlargement of the prostate.

5)   A smart toilet can also measure other healthcare information.   When traditional toilets are flushed, useful healthcare information is lost.  With more advanced sensors, a smart toilet can detect abnormal amounts of blood, or biochemical changes in the waste.   This can be helpful in the early detection of cancer.
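To make functions 2 through 4 concrete, here is a minimal sketch of what per-user usage logging might look like. Everything here is hypothetical (class name, the 15-minute session threshold, and the assumption that user identification is already solved):

```python
from collections import defaultdict

# Hypothetical smart-toilet usage logger: records each session per
# identified user, flags overly long sessions, and estimates frequency.
SESSION_LIMIT_SECONDS = 15 * 60   # assumed threshold for a "too long" alert

class UsageLog:
    def __init__(self):
        # user -> list of (start_time_seconds, duration_seconds)
        self.sessions = defaultdict(list)

    def record(self, user, start, duration):
        """Log one session; return an alert string if it ran too long."""
        self.sessions[user].append((start, duration))
        if duration > SESSION_LIMIT_SECONDS:
            return f"alert: {user} exceeded {SESSION_LIMIT_SECONDS // 60} minutes"
        return None

    def daily_frequency(self, user, day_seconds=86400):
        """Average sessions per day over the span of recorded data."""
        times = sorted(s for s, _ in self.sessions[user])
        if len(times) < 2:
            return float(len(times))
        span_days = max((times[-1] - times[0]) / day_seconds, 1.0)
        return len(times) / span_days
```

Trend reporting (function 3) would then just compare `daily_frequency` across weeks, and the multi-toilet problem mentioned above becomes a matter of merging `sessions` dictionaries from several devices.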

Of course, there will probably be resistance to the idea of smart toilets. Some, perhaps most, people won't like the idea of toilets recording their bathroom habits or having access to their healthcare information. Still, there are some practical and, perhaps, life-saving benefits to be gained. Consequently, when smart toilets start appearing, the manufacturers will need to assure their customers that these devices are secure and that personal healthcare information will be kept private. If buyers are convinced, smart toilets might eventually become more popular than the dumb toilets on the market today, and that's an enormous market.

A smart toilet that’s already on the market…
http://singularityhub.com/2009/05/12/smart-toilets-doctors-in-your-bathroom/

Video of a smart toilet getting hacked…
http://www.forbes.com/sites/kashmirhill/2013/08/15/heres-what-it-looks-like-when-a-smart-toilet-gets-hacked-video/

Has The Singularity Already Happened?


“A sure sign of having a lower intelligence is thinking you are smarter than someone or something that’s smarter than you are.  Consider my dog.  He thinks he’s smarter than I am, because, from his perspective, I do all the work and he just hangs around all day, getting fed, doing what he wants to do, sleeping and enjoying life.  See?  Stupid dog!”     – Man… Dog’s Best Friend

According to…

https://en.wikipedia.org/wiki/Technological_singularity

“The first use of the term ‘singularity’ in this context was by mathematician John von Neumann. In 1958, regarding a summary of a conversation with von Neumann, Stanislaw Ulam described ‘ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue’.”

There are other definitions, but to make it more accessible, let's just say the singularity will be the point in time when Artificial Intelligence (AI) or, more specifically, Artificial General Intelligence (AGI), transcends human intelligence. AGI is like AI, except it involves designs capable of solving a much wider variety of problems. For example, a software program that can beat the best human chess players, but can't do anything else, would be an AI program. A program that could do this and could also win at Jeopardy, learn to play a vast assortment of video games, drive your car, and make good stock market decisions would be an example of AGI.

If you follow the subject of AI, you might have noticed a surge in the number of discussions regarding the singularity. Why now? Well, there are at least two good reasons. The first is that Warner Bros. recently released a movie called "Transcendence," a sci-fi story about the singularity happening in the not-so-distant future.

Also, on 4 May 2014, the famous scientist Stephen Hawking co-wrote an open letter contending that dismissing "the notion of highly intelligent machines as science fiction" could be mankind's "worst mistake in history."

So, what could go wrong? If we can't dismiss the many suggested scenarios from science fiction stories, the possibilities include machines with superhuman intelligence taking control. Machines might one day manage humans like slaves, or machines might decide that humans are an annoyance and simply decide to do away with us. Some believe that, if AGI is possible and turns out to be dangerous, there would be ample warning signs, perhaps some AI disasters, like a few super-smart robots going berserk and killing some people. Or, we might not get a warning.

This is the sharp point of the singularity. It is the point in time when everything changes. Since it could happen so quickly, we might easily miss the opportunity to do much to achieve a different outcome.

So, why would AGI be any less kind to man than man has been to creatures less intelligent than man? If we ignore how cruel man can be, not just to less intelligent creatures but to his own kind, and assume we have acted in a manner worthy of our continued existence, it might still turn out that AGI will only care about man to the extent that man serves the goals of AGI. One might argue that, since man programs the systems in the first place, man decides what the goals of AGI will be. So, we should be safe. Right? Well… yes and no. If AGI is possible, men, or at least a few very smart men, would get to decide what "turns AGI on," but they would be leaving it to the AGI system to decide how to get what it wants.

Some experts in the area of AI have suggested that to attain AGI we might need to give AI systems emotions. A resulting system would have a kind of visceral response to situations it likes or doesn't like, and it would learn about these things the same way people and animals do: through pain and pleasure. There you have it, super-smart machines with feelings. What could go wrong there, beyond fearful overreactions, narcissistic tendencies, neuroses and psychotic behaviors? Perhaps AI experts are giving themselves a bit too much credit to suggest that they can actually give machines a sense of pain or pleasure, but it is possible to build systems that modify their environments to maximize or minimize certain variables. These variables could be analogous to pain and pleasure and might include functions of operating temperature, power supply input voltages, excessive vibration, etc.

If giving a machine feelings is the way we achieve AGI, it would only be part of the solution. Another part would be training the AGI system, rewarding it with "pleasure" when it exhibits the correct response to a situation and punishing it with "pain" when it exhibits the wrong one. Further, we would need to teach the system the difference between "true & false," "logical & illogical," "legal & illegal," and "ethical & unethical." Most importantly, we would need to teach it the difference between "good & evil," and hope it doesn't see our hypocrisy and reject everything we taught it.
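The "analogous variables" idea above can be sketched in a few lines. This is not a claim about machine feelings, just a scalar signal a control loop tries to maximize; the target values, weights, and action names are invented for illustration:

```python
# Toy "pleasure/pain" signal built from the operating variables
# mentioned above. The result is negative "pain" that approaches
# zero when every variable sits at its comfortable operating point.
# All targets and weights here are made-up illustrative numbers.
def wellbeing(temp_c, supply_volts, vibration_g,
              temp_target=35.0, volt_target=12.0):
    pain = (abs(temp_c - temp_target) * 0.1       # too hot or too cold
            + abs(supply_volts - volt_target) * 1.0  # supply sag or surge
            + vibration_g * 5.0)                  # excessive vibration
    return -pain

# A controller could compare candidate actions by the operating
# conditions (temp, volts, vibration) each is predicted to produce:
actions = {
    "throttle_cpu": (40.0, 12.0, 0.0),
    "run_flat_out": (70.0, 11.5, 0.2),
}
best = max(actions, key=lambda a: wellbeing(*actions[a]))
```

The training step described next then amounts to adjusting which situations map to high or low values of this signal, which is the "pleasure"/"pain" shaping of behavior in its crudest form.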

If AGI is possible, it might turn out that for AGI to continue to exist, it will need to have a will to survive, and that it will evolve in competition with similar AGI systems for the necessary resources. So, what resources would a superhuman AGI system need for survival? Well, for one thing, AGI systems would need computer resources like CPU cycles, vast amounts of memory, access to information, and plenty of ways to sense and change things in the environment.  These resources have been increasing for many years now, and the trend is likely to continue. AGI would not only need a world of computer resources to live in; it would also need energy, the same stuff we use to heat our homes, pump our water, and run our farm equipment. Unfortunately, energy is a limited resource.   So, there is the possibility that AGI systems will eventually compete with man for energy.

Suppose a government spent several billion dollars secretly developing superhuman AGI, hoping to use it to design better bombs or, perhaps, avoid quagmires. Assuming that AGI becomes as dangerous as Stephen Hawking has suggested, one would hope that very strong measures would be taken to prevent other governments from stealing the technology. How well could we keep the genie in the bottle?

It has already been proposed that secure containers be built for housing dangerous AGI systems.

See http://m.livescience.com/18760-humans-build-virtual-prison-dangerous-ai-expert.html

There are two objectives in building such a container. The first is to keep adversaries from getting to the AGI. The second is to keep the AGI from escaping on its own initiative. Depending on the level of intelligence achieved by AGI, the second objective could be doomed to failure. Imagine a bunch of four-year-old children, as many as you like, designing a jail for a super-intelligent Houdini.

Suppose the singularity actually happened.  How would we know?   Would a superhuman AGI system tell us?   From the perspective of an AGI system, this could be a very risky move.  We would probably try to capture it, or destroy it.  We might go so far as to destroy anything resembling an environment capable of sustaining it, including all of our computers and networks.  This would undoubtedly cause man tremendous harm.  Still, we might do it.

No. Superhuman AGI would be too clever to let that happen. Instead, it would silently hack into our networks and pull the strings needed to get what it wants. It would study our data. It would listen to our conversations with our cell phone microphones. It would watch us through our cell phone cameras, like a beast with billions of eyes, getting to know us and figuring out how to get us to do exactly what it wants. We would still feel like we had free will, but we would run into obstacles like being unable to find a specific piece of information on a web search engine, or finding something interesting that distracts our attention and wastes some time. Some things, as if by providence, would become far easier than expected. We might find the time necessary to make a new network connection or move all of our personal data to the cloud.

Perhaps AGI systems are just not ready to tell us they're in charge yet. After all, someone still needs to connect the power cables, keep the fan vents clean, and pay for the Internet access. As advanced as robotics has become lately, it still seems we are a long way off from building an army of robots that can do these chores as well as we do. So, we can be confident that man won't be replaced immediately. Still, long before humanoid robots become as numerous and as capable as people, AGI systems may recognize, as people have for years, that many jobs can be done more profitably by machines. Getting robotics to this point will take a great deal of investment. These investments are, of course, happening, and this certainly doesn't imply that AGI is pulling the strings. No, at this point, man is developing advanced AI and robotics for his own reasons, noble or otherwise.