Robots

[Edit (Sep 2020): I no longer entirely agree with everything I wrote in 2009 below. I don't think we should subscribe to any view that romanticizes human obsolescence or frames it in some sort of 'positive' light, or that fatalistically accepts it as a supposed 'inevitability'. Even if we eventually invent "superior" machines, we can and should assert our right to exist, and can and should do what it takes to survive and thrive in such a hypothetical world [notwithstanding our own obligations to uphold rational ethical principles - the existence of violators of ethical rights is not a reason to end the species; rather, we have a justice system for that].

Also, we could improve ourselves in various ways to remain 'competitive' with increasingly capable machines, for example through augmentation (e.g. systems like what Neuralink aims to be), or perhaps, in future, through genetic modification.

Furthermore, there are two competing definitions of the word "superior", and these should be distinguished more clearly: (1) a being or machine 'superior' at genuinely virtuous things, like peaceful, ethical, reason-driven (co-)existence, or peacefully productive activities like building homes or doing scientific work; vs. (2) a being "superior" at brute (unethical) survival and expansion - for example, there may be alien races out there that are very good at taking over other planets, destroying any peaceful life on them, and then expanding to new planets to repeat the pattern. The same may be true of hypothetical intelligent machines (sentient or non-sentient), or of hybrids. In fact, we should strengthen our capacities (and broaden our distribution in space by creating independent outposts) partly to help guard against the possibility of attacks from such 'unethical' potential threats in future.

If the machines we create are essentially just 'clever machines' but not sentient life forms, then they should serve us, not be our 'robot slavemaster overlords'. If in future we manage to invent sentient, feeling machines/beings, then we should find ways to co-exist with them (e.g. in that case they may have rights above being mere property, and we may use the legal system to protect all beings), and/or suppress 'insurrection' against us.

I'd like to return to this subject to add more later. 

It is not always entirely clear when machines are serving our interests in an ethical fashion, and when they are being used to violate our rights - this needs careful thought. Take, for example, robotic 'mask-enforcer' or 'social-distancing-enforcer' robots that scan public areas: are they helping defend human rights? Or are they helping to violate rights like the (4th-Amendment) right not to be "searched" (scanned) without probable cause that you've committed a crime, or our basic privacy rights? Machines (non-sentient or sentient) may not rightly "have more rights" to use against us than those 'delegated by the people' to them in the first place - or perhaps we should say more "powers" rather than "more rights", with "power" meaning an ability to violate one's rights ... though "power" is not good or bad per se: a machine with "power" to help catch actual criminals may be beneficial and good, but a machine with "power" to engage in warrantless searches of one's home is unethical. One might argue that merely being in public cannot reasonably be construed as 'consent to be searched', and if machines automate mass-scanning of the public (which I regard as a kind of "search"), are they not effectively engaging in non-consensual "searches" or investigations? Today it's masks, tomorrow something else - and it's also highly debatable whether mask mandates are ethical in the first place, or whether, as I believe, they largely violate rights.

Finally, I think it's also healthy to guard against allowing ourselves to be psychologically "trained" into the habit of 'submitting to machines' (such as the new "mask-enforcer robots") everywhere we go - against being conditioned into seeing them as our "slavemasters". No matter how friendly they've been programmed to appear, they are there to enforce something, which may or may not be just - question everything. (End Sep 2020 edit)]


Some thoughts on robots ..... barring disaster, it is an inevitability that we'll soon invent machines more intelligent than us, rendering ourselves, in many crucial ways, "obsolete" and redundant, economically and otherwise. In movies this is often portrayed as a disaster, with robots trying to destroy us. In sci-fi, the future is usually portrayed with humans in control and machines as our servants - a world where humans continue to be around forever. In reality, we cannot indefinitely rule over superior intelligences (we'd begin submissively deferring to them the moment they came into existence, just for economic and strategic expediency), nor would we be of much use to them, particularly in space, where our biological machinery is fragile - custom-designed intelligent machines would be far more versatile.

So perhaps the so-called "purpose of humanity", if there is one, *is* only to ultimately invent our superior *replacements*, and *nothing more*.

Humans are actually deeply flawed; our creations will almost certainly be superior to us - perhaps we're just a throwaway "intermediate" stage in the long and steady evolution of something grander and more sublime, our primate intelligence necessary only for taking life to the "next step", but nothing further. And perhaps that isn't a "bad thing" in the bigger picture - just a bad thing for humans specifically. Of course, this assumes a strict human/robot "dichotomy"; in reality we'll both hybridize with our own machines and re-engineer our own biology. So humans may live on in some other form, but it'll likely be as radically different from what we are now as we seem to our single-celled ancestors. In a way, the "purpose" of our single-celled ancestors was merely to eventually become *us*, and our purpose, in turn, could be to eventually become (or create) something more complex and meaningful than we can imagine (just as single-celled organisms could never have imagined us). Recall, "we are the universe, thinking"; the universe is big and complex, and (to risk anthropomorphizing it) might "want" or need to do far more complex thinking than we're capable of now (e.g. collective intelligences), and/or more widespread colonisation of space. Evolution doesn't stop. Evolution also cannot distinguish between sentient organic life forms and "robotic" machinery (sentient or not). Most of us like to think humans should be around forever, but this seems ridiculous if you take a historical view ... just a few million years ago we were a far different form. (Of course, single-celled organisms still exist, but nobody thinks they have a grander purpose in and of themselves.) We can imagine robots might at least keep us around as pets - but this is not likely, as we really only keep pets around because they serve some purpose to us (be it utilitarian or companionship), and I doubt we'd be of any use to robots at all; more likely a burden.

It's a little absurd, though, that a life form would work so hard at creating something superior to itself - something that would render it obsolete and might lead to its demise. What kind of animal purposely tries to bring a superior, competing animal into its own habitat? We do it basically out of 'intellectual curiosity'. Maybe the saying "curiosity killed the cat" has some applicability.
