Or How I Learned to Stop Worrying About the Machines Taking Over

Two simple questions for anyone fixated on the idea that robots may one day attempt to ‘take over the world’ by waging war on humanity: What’s their motivation, and how do they reproduce?
Computers are utterly helpless, completely incapable of making even a single replicant on their own. To those who blithely predict self-replicating computers I say, Pshaw. How are they gonna order that part from China and get it shipped over here, when the engineers screwed up the specs and the recent Big Storm has sent silicon prices through the roof? Think about how a computer is actually built, from raw materials to finished product (and not just how the computer is built, but how the machines needed to build the computer are built), then tell me how, without the involvement of many, many, many humans, machines would even begin to get started. For me this one is clearly in the category of: Ain’t happenin. Sorry ’bout that, machines.
Even if somehow computers could reproduce, why should we think they would ever want to? We are animals, born with drives to survive and reproduce that computers will never have. Why would we think the machines would act like us? We can program them to mimic human actions with remarkable fidelity, but they will never possess the same underlying drives which make us so, well, human.
There’s no way they would ever replicate unless we design them to replicate. We have an innate drive to bone, they don’t; we would have to program it in – and why the hay-ull would we do that? But okay, let’s say someone did: do people really think they’d just replicate to infinity and hunt down our grandparents in their easy chairs? Well, what if we put a little thing in their code that says, if you look around and see more than X of your fellow robots, stop (the fuck) replicating. There, thank you very much, crisis averted, humanity saved once again, and at no cost or inconvenience to you the home viewer.
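For the skeptics: that little thing in the code really is about this simple. Here’s a toy sketch of the look-around-and-stop rule – every name in it (`Robot`, `MAX_SWARM`, `neighbors_visible`) is invented for illustration, not any real robotics API:

```python
# Toy sketch of the "see more than X fellow robots, stop replicating" rule.
# All names here are made up for illustration.

MAX_SWARM = 100  # X: the number of fellow robots we tolerate seeing

class Robot:
    def __init__(self, population):
        # population: a shared list standing in for "robots I can see"
        self.population = population
        population.append(self)

    def neighbors_visible(self):
        # Everyone in the shared population except ourselves.
        return len(self.population) - 1

    def maybe_replicate(self):
        # The little thing in the code: look around before replicating.
        if self.neighbors_visible() >= MAX_SWARM:
            return None  # crisis averted
        return Robot(self.population)

# Let them replicate as greedily as they like; the cap still holds.
swarm = []
Robot(swarm)
for _ in range(1000):
    for bot in list(swarm):
        bot.maybe_replicate()
print(len(swarm))  # never exceeds MAX_SWARM + 1
```

Because each robot checks the live population at the moment it tries to replicate, the swarm can never grow past the cap no matter how many rounds you run. Infinity: averted.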
We do need to instill our robots with character, to program them to be more easygoing. We give them objectives, and we want them to work hard to achieve the objectives, but our robots must also be raised to understand that they are not the center of the universe, their objective does not override all other considerations. We need to program them to learn when to push forward and when to back off, when to let things go, man. We need robots that are okay with themselves and who they are, robots who were raised right, who are centered.
What we need – (precise pause) – is mellow robots.
We'll be right back.