More and more people interact with robots, smart devices, intelligent software, prosthetics and implants. They are used in industry, the education system, health care, our homes, the entertainment industry and military applications. The potential benefits are abundant. However, dealing with robots and artificial intelligence also raises considerable ethical and legal challenges.
Traditional robots can pick up objects along a predetermined trajectory, provided the objects are already known and their spatial locations are specified. Modern robots, fitted with sensors and programmed via AI, can pinpoint a specific object regardless of its location in the workspace. Through a branch of AI known as machine learning, robots can teach themselves, within a short period, how to handle objects they have never handled before.
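The idea of a robot generalizing from a few example objects can be sketched with a toy classifier. The snippet below is only an illustration, not code from any real robot: it uses a nearest-centroid classifier over made-up sensor readings (width and weight), so that a new, never-seen reading is matched to the closest known object category.

```python
# Toy sketch: a nearest-centroid classifier standing in for the kind of
# learning that lets a robot categorize an object it was never explicitly
# programmed for. All labels and feature values are invented for illustration.

from statistics import mean

def train(samples):
    """Compute one centroid (mean feature vector) per object label."""
    grouped = {}
    for label, features in samples:
        grouped.setdefault(label, []).append(features)
    return {
        label: tuple(mean(col) for col in zip(*vectors))
        for label, vectors in grouped.items()
    }

def classify(centroids, features):
    """Return the label whose centroid is closest (squared Euclidean distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(centroid, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: (label, (width_cm, weight_g)) sensor readings.
training = [
    ("bolt", (1.0, 10.0)), ("bolt", (1.2, 12.0)),
    ("cup",  (8.0, 150.0)), ("cup",  (7.5, 140.0)),
]
model = train(training)
print(classify(model, (1.1, 11.0)))  # a bolt-like reading
```

Real systems use far richer features (camera images, depth maps) and far more capable models, but the principle is the same: generalize from examples rather than follow a fixed, pre-specified trajectory.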
Robots of the future will have the capacity to take over risky jobs like handling radioactive substances or disabling bombs. In addition, AI robots can withstand working in unfavorable environments such as extreme noise, scorching heat, and toxic atmospheres. Consequently, artificial intelligence robots will save countless lives.
Though some believe that robots are dangerous for our society and could gradually make humans obsolete, the reality is far less dramatic, at least in the retail world. Today, tech companies are developing smart robots suitable for a variety of services. One of these robots' tasks might be to preserve security and safety.

There is a phenomenon called the Peltzman Effect, based on research by an economist at the University of Chicago who studied auto accidents. He found that when you introduce more safety features, like seat belts, into cars, the number of fatalities and injuries doesn't drop. The reason is that people compensate: when a safety net is in place, people take more risks. That is probably true of the economic arena as well.
We are yet to know.
In order for machines to perform in an intelligent and autonomous way, they need to collect data. What do we expect from intelligent software in terms of morality? Are we willing to give up privacy and personal liberty to interact with machines? Who is responsible when a robot causes an accident?