
Only five countries voted in favour of banning autonomous (a.k.a. ‘smart’) killer robots at a recent UN convention. That could mean many countries are already building their own autonomous robots, or are keen to start soon. In fact, Russia has announced that autonomous robots that will gun down trespassers, no questions asked, will soon man its ballistic missile bases.

Of course, it is not all about battles and war. Japan’s Prime Minister Shinzo Abe was reported by Jiji Press as saying, “In 2020, I would like to gather all of the world’s robots and aim to hold an Olympics where they compete in technical skill.” It would be an understatement to say that the future will have a lot of autonomous robots. They are already here, being designed and manufactured around the world. And perhaps they have already left the industrial shop floor and are on their way to your living room.


Step one to world domination: infiltrate homes
ABI Research says 1.8 million home automation systems will ship this year, a figure it expects to grow to 12 million within the next five years. What’s more, seemingly unobtrusive electronics like domestic lighting, home heating and cooling, smoke sensors and electronic door locks will be collecting usage data that other machines can make use of.


Companies are aggregating previously available technologies and packing them into small, affordable devices that make life at home easier. Control4 pairs its home automation technology with ZigBee, while Verizon runs its own automation service powered by technology that Motorola picked up through its acquisition of 4Home.

Jibo helping in the kitchen

Each of these data points is trivial on its own, but big data analytics run on the accumulated output of the bots in your home lets outside machines, and their users, gain considerable insight into the behaviour of the people living there. These machines can learn when you come home, and whether you head straight for a bath, a meal or a workout. Over the long term, the system can work out whether other people live with you, or when it is time to ‘ask’ an autonomous vacuum cleaner to clean up your place. It could even determine the best time to break into your house while you are sleeping!
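To appreciate how little code such profiling takes, consider this minimal Python sketch; the smart-lock log, timestamps and event names are all invented for illustration:

```python
from datetime import datetime
from statistics import mean

# Hypothetical smart-lock log: (timestamp, event) pairs over several days.
events = [
    ("2014-08-04 18:42", "door_unlocked"),
    ("2014-08-05 18:55", "door_unlocked"),
    ("2014-08-06 19:03", "door_unlocked"),
    ("2014-08-07 18:47", "door_unlocked"),
]

# Convert each unlock event to minutes after midnight.
arrivals = []
for ts, event in events:
    if event == "door_unlocked":
        dt = datetime.strptime(ts, "%Y-%m-%d %H:%M")
        arrivals.append(dt.hour * 60 + dt.minute)

avg = int(mean(arrivals))
print(f"Typical arrival home: {avg // 60:02d}:{avg % 60:02d}")  # 18:51
```

Four lock events are enough to estimate when the house fills up each evening, and, by implication, when it stands empty.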

US-based Droplet Robotics has introduced a data-analytics powered sprinkler system that identifies soil and plant types to dispense precisely the amount of water each plant needs for optimal growth. It also gives the user analytics such as how much water goes to each type of plant, tree or lawn. Systems like this use cloud computing, connected services and machine-to-machine (M2M) communication to analyse situations and identify events, and then implement their decisions through the robots they control.
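Droplet has not published its algorithms, but the core decision might resemble this sketch, in which an invented per-plant baseline is scaled by a soil-moisture reading:

```python
# Hypothetical baseline water needs, in millilitres per plant per day.
BASELINE_ML = {"fern": 250, "cactus": 30, "lawn": 900, "tomato": 400}

def water_needed(plant: str, soil_moisture: float) -> float:
    """Scale the baseline by soil dryness (0.0 = bone dry, 1.0 = saturated)."""
    dryness = 1.0 - soil_moisture
    return BASELINE_ML[plant] * dryness

# A tomato bed whose moisture sensor reads 35%:
print(f"Dispense {water_needed('tomato', 0.35):.0f} ml")  # -> Dispense 260 ml
```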

Remember robovacs? These are the intelligent, autonomous cleaning systems of yesteryear that analyse their surroundings and use the data to work out the most efficient cleaning routine. Things have changed a lot since those robots were introduced, though.

The latest robovacs, like the Roomba 800 series from iRobot, have raised the ‘smartness factor’ by making the most of the many sensors the bot carries. The newer robots can detect cliffs and steps before falling down them, follow walls rather than bouncing off them, and negotiate around items cluttering the floor. They can even escape if caught in a particularly difficult nook under a table.
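iRobot’s firmware is proprietary, but the priority scheme just described, cliff avoidance first, then escape, then wall-following, can be sketched in a few lines of Python (the sensor and behaviour names are invented):

```python
def choose_action(sensors: dict) -> str:
    """Pick one behaviour per control tick, highest-priority hazard first."""
    if sensors["cliff_ahead"]:        # infrared cliff sensors fire first
        return "reverse_and_turn"
    if sensors["bumper_pressed"]:     # physical contact: wiggle out of the nook
        return "wiggle_free"
    if sensors["wall_to_right"]:      # hug the wall instead of bouncing off it
        return "follow_wall"
    return "spiral_clean"             # open floor: default coverage pattern

print(choose_action({"cliff_ahead": False, "bumper_pressed": False,
                     "wall_to_right": True}))  # -> follow_wall
```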


The next generation of housekeeping robots promises to be a lot smarter. A robot designed to demonstrate cognitive systems that self-understand and self-extend (CogX) is one such example. It is equipped with probabilistic reasoning and planning capabilities that let it exploit the facts and pieces of knowledge it picks up. For instance, if you ask this robot to find a pack of cornflakes, it will exploit the knowledge that someone has most probably left it in the kitchen, and start its search there.
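The underlying trick is simple enough to show in a few lines of Python. In this toy version, the robot ranks rooms by an invented prior probability of containing the cereal and searches in that order:

```python
# Invented priors: where a cereal box is most likely to have been left.
priors = {"kitchen": 0.70, "dining room": 0.20,
          "living room": 0.08, "bedroom": 0.02}

# Searching the most probable rooms first minimises expected search time.
search_order = sorted(priors, key=priors.get, reverse=True)
print(search_order)  # ['kitchen', 'dining room', 'living room', 'bedroom']

# Expected number of rooms visited, assuming each search is conclusive:
expected = sum((i + 1) * priors[room] for i, room in enumerate(search_order))
print(f"Expected rooms visited: {expected:.2f}")  # 1.42
```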

Technology like this can make robots far more intelligent than they are now, and more efficient in cases like the one mentioned above.

Step two: understand emotions
We finally have the technology to ensure that even if your spouse doesn’t understand your feelings, you can count on your robot. The Computer Expression Recognition Toolbox (CERT) is a complete system for fully automated facial expression recognition. CERT detects spontaneous facial expressions, and has been applied to discriminating posed from genuine expressions of pain, detecting driver drowsiness, adaptive tutoring systems and interventions for children with autism.


CERT’s successor is already on the market as FACET Vision by Emotient. The Emotient API analyses users’ emotional responses, detecting everything from joy, surprise and anger to complex states like frustration and confusion; it can even identify blends of two or more emotions. It does this using facial Action Units (AUs), elementary facial muscle movements that the robot or application detects and monitors to work out what expression a person is showing. Emotient uses AUs to detect over 28 facial actions, from a raised eyebrow to a jaw drop. The company recently raised over six million US dollars for its facial expression technology.
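Emotient’s actual models are not public, but the AU idea itself is easy to illustrate: score each emotion as a weighted combination of elementary muscle movements. The AU numbers below follow the standard Facial Action Coding System; the weights and intensities are made up:

```python
# FACS action units: AU1 = inner-brow raiser, AU4 = brow lowerer,
# AU12 = lip-corner puller, AU26 = jaw drop. Intensities run 0.0-1.0.
observed = {"AU1": 0.1, "AU4": 0.8, "AU12": 0.0, "AU26": 0.2}

# Invented weight table mapping AU evidence to emotions.
emotion_weights = {
    "joy":      {"AU12": 1.0},
    "anger":    {"AU4": 0.9, "AU26": 0.1},
    "surprise": {"AU1": 0.6, "AU26": 0.4},
}

scores = {
    emotion: sum(w * observed.get(au, 0.0) for au, w in weights.items())
    for emotion, weights in emotion_weights.items()
}
print(max(scores, key=scores.get), scores)  # anger dominates this blend
```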

Affectiva, a company that grew out of MIT’s Media Lab, develops Affdex, a competing emotion measurement technology. The Affdex SDK enables any connected device, even a smartphone, to read emotions and expressions. On robots, the data for such systems comes from mounted 3D cameras and from sensor arrays on the robots’ bodies. One much-hyped application idea is robotic pets that support you through times of mental difficulty. In short, a robot can be a friend in need.

Step three: replace their animals
Robotics is not always about humanoid form factors. Several recent robot concepts and designs were inspired by fish, birds and other animals. Jayakrishnan of Asimov Robotics finds these nature-inspired models very intriguing. Such robotic pets are also known as companion robots: an emotional companion to some, a utility companion to others.

PARO is one such high-tech interactive robot, developed by AIST, Japan’s National Institute of Advanced Industrial Science and Technology. It allows the benefits of animal therapy to be administered to people in environments where live animals present treatment or logistical difficulties. Think of it as a Tamagotchi (the hand-held digital pet created in Japan) with a body. Robotic pets use interactive and adaptive computing to turn sensor input into a simulation of the biological characteristics of the animal they are pretending to be.



PARO, for example, has five kinds of sensors: tactile, light, audio, temperature and posture. It processes their data to perceive people and its environment. With the light sensor it recognises light and its intensity; through its tactile sensors it feels being stroked or struck, and through its posture sensor, being held. With its audio sensor it can recognise the direction of a voice, and words such as its name, greetings and praise.
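AIST has not released PARO’s control code, but the stimulus-to-behaviour mapping described above might be sketched like this (thresholds and behaviour names are invented):

```python
def react(tactile: float, held: bool, heard_name: bool) -> str:
    """Map fused sensor readings to a behaviour, harshest stimulus first.

    tactile: signed pressure, positive = gentle stroke, negative = a hit.
    """
    if tactile < -0.5:
        return "cry and turn away"     # discourage rough handling
    if held:
        return "relax and close eyes"  # posture sensor: being cradled
    if heard_name:
        return "turn toward voice"     # audio sensor: direction + keyword
    if tactile > 0.2:
        return "wag tail and blink"    # reward gentle stroking
    return "idle"

print(react(tactile=0.6, held=False, heard_name=False))  # -> wag tail and blink
```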

Researchers at National Sun Yat-sen University have developed a framework that uses a neural network-based approach to construct behaviour primitives and arbitrators for robots. This architecture could let them build a robot that learns, or evolves its behaviour, based on how a human interacts with it and on the data from its sensors. They believe this will be far easier to implement than traditional solutions.
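The published architecture is far more elaborate, but its core idea, an arbitrator that re-weights behaviour primitives according to human feedback, can be caricatured in a few lines (the behaviour names and learning rate are invented):

```python
import random

behaviours = ["greet", "follow", "play", "rest"]
weights = {b: 1.0 for b in behaviours}  # start with no preference

def pick() -> str:
    """Choose a behaviour with probability proportional to its weight."""
    return random.choices(behaviours, weights=[weights[b] for b in behaviours])[0]

def feedback(behaviour: str, reward: float, rate: float = 0.3) -> None:
    """Nudge a behaviour's weight up or down after the human's reaction."""
    weights[behaviour] = max(0.1, weights[behaviour] + rate * reward)

chosen = pick()
feedback(chosen, reward=+1.0)  # the human smiled: do this more often
```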

PhD student Mriganka Biswas says in an article in ScienceDaily, “Cognitive biases make humans what they are, fashioning characteristics and personality, complete with errors and imperfections. Therefore introducing cognitive biases in a robot’s characteristics makes the robot imperfect by nature, but also more human-like. Based on human interactions and relationships, we will introduce characteristics and personalities to the robot. If we can explain how human-to-human long-term relationships begin and develop, it would be easier to plan the human-robot relationship.”

Step four: take over their vehicles


OEMs have started taking autonomous cars seriously, and Land Rover has taken its first step towards self-driving, cloud-connected, augmented reality-based vehicles. Discovery Vision is Land Rover’s all-new SUV concept, expected to hit the roads in a few years. BMW and Audi are coming out with their own versions, although none has reached Transformers levels yet.

NVIDIA’s Tegra K1 mobile processor is currently the processor of choice for Audi as the German automaker dips its toes into autonomous vehicles. The K1 packs a 192-core graphics processing unit (GPU) and sits at the heart of Audi’s automotive infotainment system, which gives an idea of the level of computing an automotive robot needs to function properly.

Of course, you need some very sophisticated software to ensure that your car doesn’t end up jumping off a bridge with you inside. Bosch is developing an Automatic Park Assist technology, due out next year, that lets a car be parked remotely using just a smartphone app. Another Bosch technology, Traffic Jam Assistant, steps in when the vehicle is moving at low speeds. Google, meanwhile, is already trying to make fully automated driving a reality. Bosch’s autonomous technology gathers data using an array of sensors, including radar and video cameras, as well as a roof-mounted laser scanner (LIDAR) that generates a detailed 3D map of the environment.

While most of these technologies overlap with those used in other autonomous robots, safety is critical for automotive robots, so they have many more policies and regulations to consider while operating. The same goes for the ruggedness of the electronics they use. How much the electronics content of vehicles has grown over the years was obvious at the International Consumer Electronics Show (CES) 2014, where cars were among the hottest attractions.


Sensors that empower self-driving cars have already started showing up in mid-range cars for limited uses, such as ultrasonic systems and front-mounted radar for adaptive cruise control. Automakers are making the most of this through sensor fusion, combining data from different sensor systems and cameras so the car can make a decision before the driver even knows what’s happening.
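A classic fusion recipe is the inverse-variance weighted average: two noisy distance estimates, say one from radar and one from a camera, combine into a single estimate more certain than either. The noise figures below are invented:

```python
def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Inverse-variance weighted fusion of two independent measurements."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    return fused, 1.0 / (w1 + w2)

# Radar says the car ahead is 25.0 m away (low noise); the camera says 27.0 m.
distance, variance = fuse(25.0, var1=0.2, z2=27.0, var2=1.0)
print(f"{distance:.2f} m (variance {variance:.2f})")  # 25.33 m, tighter than either
```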

In a report on CNN, Dr Werner Huber, a BMW project manager for driver assistance, spoke about how the car is becoming a driving, moving robot. And the Federal Bureau of Investigation (FBI) believes driverless cars could be used as lethal weapons, according to a report released by its Strategic Issues Group.

Step five to world domination: replace humans
What the industrial revolution did to manual labourers in the last century is now being done to knowledge workers (like you and me). The medical industry, too, is witnessing breakthrough innovations powered by robotics. “Most of the intelligent systems introduced in medical industry are still under trial and not certified yet to be used widespread,” says Satish Mohanram of NI, “but the applications in these fields are huge.” One such possibility is a system that can diagnose a medical condition and prescribe remedies, potentially performing telemedicine on its own.


Not all robots that replace humans are humanoid, though. A tiny sperm-shaped swimming microbot (322µm long, 5.2µm wide and 42µm thick) with a magnetic head now competes with the most senior of doctors at surgery. Armed with a 200nm cobalt-nickel layer, this little fellow develops a dipole moment that lets its flexible structure align with weak oscillating magnetic field lines, generating propulsion. The robot, named MagnetoSperm, could assist in targeted drug delivery and in-vitro fertilisation, and even perform minimally invasive surgery.
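The propulsion principle reduces to textbook physics: the magnetised head feels a torque of magnitude τ = mB·sinθ that makes it oscillate with the field, and the flexible tail converts that oscillation into thrust. A back-of-envelope sketch, with magnitudes invented for illustration rather than taken from the MagnetoSperm paper:

```python
import math

def torque(m: float, B: float, theta_deg: float) -> float:
    """Torque magnitude on a magnetic dipole: tau = m * B * sin(theta)."""
    return m * B * math.sin(math.radians(theta_deg))

# Illustrative numbers only: a tiny dipole in a weak oscillating field.
m = 1e-12  # dipole moment, A*m^2 (invented)
B = 5e-3   # field strength, tesla (invented)
print(f"{torque(m, B, 30):.1e} N*m")  # 2.5e-15 N*m, plenty at the micro scale
```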


It is not just doctors who could be replaced; robots can affect even gourmet chefs. San Francisco-based Momentum Machines has a robot, occupying about 2.2 square metres (24 square feet), that takes customised orders and turns out around 360 gourmet burgers per hour, one every ten seconds. Since the burgers are produced entirely by the machine, untouched by human hands, the process is more sanitary. Higher productivity, lower cost (after the initial investment) and consistency are its other selling points.

What the future holds
At this rate, we are fast approaching the retro-future depicted in The Jetsons and WALL·E. Robots taking up human jobs could mean not only faster, better and cheaper services but perhaps also unemployment and lower wages for existing employees. We are facing a question very similar to the one we faced when computers first entered the mainstream.

Perhaps it is time to think about working in a world where robots perform major tasks and are even judged to be better at them than humans. After all, they don’t have emotional problems or criminal tendencies, and they don’t try to understand the meaning of life. Yet.


Dilin Anand is a senior assistant editor at EFY.

Anagha P is a dancer, karaoke aficionado, and a technical correspondent at EFY. Find her on Twitter @AnuBomb.
