Monday, November 07, 2011

Robots that teach themselves...

Further to my earlier blog posts on the latest developments in robotics and the economic, social and political threat they pose to humanity as a whole:

In a world first, Associate Professor Osamu Hasegawa of the Tokyo Institute of Technology has developed a system that allows robots to look around their environment, combine what they see with research on the Internet, and "think" about how best to solve a problem.

Naturally, we expect robots to process and perform the tasks they are preprogrammed to do. The Self-Organizing Incremental Neural Network, or "SOINN," however, is an algorithm that allows robots to use their existing knowledge to infer how to complete tasks they have been told to do but NOT programmed to do. That includes learning about, and acting on, objects they have never encountered before.
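To make the idea concrete, here is a toy sketch (in Python) of the kind of incremental learning SOINN is built on: inputs similar to what the network has already seen refine an existing "node" of knowledge, while sufficiently novel inputs make the network grow a new node, with neither case preprogrammed. This is only an illustration of the principle, not Hasegawa's actual system; the class name, thresholds and parameters below are my own simplifications (the real algorithm also tracks a second winner, an edge graph and node ages).

    import math

    class TinySOINN:
        def __init__(self):
            self.nodes = []   # learned prototype vectors
            self.wins = []    # win count per node (controls learning rate)

        @staticmethod
        def _dist(a, b):
            return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

        def _winner(self, x):
            # Index of the node closest to input x.
            return min(range(len(self.nodes)),
                       key=lambda i: self._dist(x, self.nodes[i]))

        def _threshold(self, i):
            # Simplified similarity threshold: distance to the nearest other
            # node. (Real SOINN derives this from a node's neighbours in an
            # edge graph instead.)
            return min(self._dist(self.nodes[i], self.nodes[j])
                       for j in range(len(self.nodes)) if j != i)

        def learn(self, x):
            if len(self.nodes) < 2:
                # Bootstrap: the first two inputs simply become nodes.
                self.nodes.append(list(x))
                self.wins.append(1)
                return
            s = self._winner(x)
            if self._dist(x, self.nodes[s]) > self._threshold(s):
                # Novel input: nothing known is similar enough, so grow.
                self.nodes.append(list(x))
                self.wins.append(1)
            else:
                # Familiar input: nudge the winning prototype toward it.
                self.wins[s] += 1
                rate = 1.0 / self.wins[s]
                self.nodes[s] = [w + rate * (xi - w)
                                 for w, xi in zip(self.nodes[s], x)]

Fed a stream of inputs, such a network decides for itself when its existing knowledge suffices and when something genuinely new has appeared:

    net = TinySOINN()
    for x in [(0.0, 0.0), (0.1, 0.1), (5.0, 5.0), (5.1, 4.9)]:
        net.learn(x)
    print(len(net.nodes))  # 3: two seed nodes, plus one grown for the
                           # novel region near (5, 5)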

Self-learning in relation to material objects will soon be followed by self-learning in relation to some (and then all) aspects of animal and human behaviour....

How long before these self-learning robots teach themselves to decide to take on tasks they have not been asked to do?

1 comment:

Andi and Sheba Eicher said...

Thanks for the thought, Prabhu.

Robotics challenges us, especially in deciding what is human and what is not. One thing is clear: accountability for our human-made machines has to rest with their makers and operators - something we are seeing lived out in the drone-based killing going on in the Af-Pak region.

National Geographic ran a chilling article on robotics a few months ago (http://ngm.nationalgeographic.com/2011/08/robots/carroll-text/2), in which this 'gem' was embedded:

Arkin says it isn't the ethical limitations of robots in battle that inspire his work but the ethical limitations of human beings. He cites two incidents in Iraq, one in which U.S. helicopter pilots allegedly finished off wounded combatants, and another in which ambushed marines in the city of Haditha killed civilians. Influenced perhaps by fear or anger, the marines may have "shot first and asked questions later, and women and children died as a result," he says.

In the tumult of battle, robots wouldn't be affected by volatile emotions. Consequently they'd be less likely to make mistakes under fire, Arkin believes, and less likely to strike at noncombatants. In short, they might make better ethical decisions than people.

We have always wanted to play God - and have never wanted to take responsibility for the consequences. Some forms of robotics take this and put it into 'reality'.