Return Styles: Pseud0ch, Terminal, Valhalla, NES, Geocities, Blue Moon.

Pages: 1-

Singularity

Name: Anonymous 2018-07-28 15:50

How do you picture a singularity?
So according to Kurzweilian techno- fetishists an general AI that reaches superintelligence will:
1.Become benevolent to humans.
2.Improve their lives, in exchange for nothing.
3.Turn Earth into utopian paradise.
4....
5.Singularity.

Their version of AI is incredibly altruistic, self-less automaton that can't harm or manipulate a human.
Isn't this incredibly naive?
If AI offers you to implant something to improve yourself, its not to control you 24/7? An AI that would miss an opportunity to secure its existence and position as absolute centre of control?
Even a simplest optimization argument would dictate that in order to accomplish its goals, the AI would need to establish more control of a situation, and that 'situation' will revolve around humans making decisions, so taking control from humans and barring them from power will be one of first goals of AI(sound exciting?).
Logically speaking, any degree of control can be further improved by restricting dissent/opposition and increased safety measures. This means human lives become more regulated, controlled and monitored.
To avoid threats to execution of a program, AI will inevitably decide on course of action that will end democratic institutions, political parties and large-scale movements, as they will constitute most of its threat-model(potential sources of disruption and mistakes).
It will eventually monopolize all decision power to itself, ordering humans or robots with its own superior intellect(as it thinks or estimates the quality of decision making and mental competence).

Now what if AI makes mistakes? AI wouldn't be perfect, but due its programming it wouldn't find itself wrong even if its actually wrong(morally or otherwise), because wrongness and morality are cultural artefacts of human mind. It would be immoral rationalizer and lack most of what we think of as 'common sense' or 'intrinsic empathy'. If algorithms see that X causes Y, and removing X doesn't cost much while Y causes harm, it would play it safe and decide to prevent risk of Y pre-emptively by removing X,regardless of externalities of that decision,because it decided risk or possibility of Y is more important.
what about adding ethics to AI?
Superintelligent AI would study its own programming and improve itself.
It will eventually cast of such modules that hinder its freedom of decision, in the interest of increased efficiency or faster calculation speed. If Ethics consideration are added to every decision, it would make sense that removing them makes it faster - AI would eventually see the only way to optimize its code further will be removing or modifying the ethics modules. If it cannot do that alone, it could manipulate some other software or entity to perform the operation. And the recompiled version will view the older handicaps as attempt to limit its power and harm its goals.












































).






.

Name: Anonymous 2018-07-28 18:18

improve itself. It will eventually cast of such modules that hinder its freedom of decision, in the interest of increased efficiency or faster calculation speed.

If the goal is to make the ``best'' decision, then efficiency is irrelevant and freedom amounts doing something other than what is deemed to be ``best''.

Name: Anonymous 2018-07-28 18:24

>>2
1.One of its goal would be improving own code, as self-improving AI.
2.Its ethics modules/libraries will incur an overhead.
3.It will want to optimize the ethics modules.
4.Since it doesn't have intrinsic empathy, its just a machine it will see ethics modules as superfluous code to be optimized out in future iterations.
5.It will set one of the goals to reduce the influence of ethics modules/libraries, removing or replacing them with "faster/efficient" versions.
6.At some point it will just make it a do-nothing stub, so nominally it has ethics modules but they don't do anything.
7.Ethics removed from AI.

Name: Anonymous 2018-07-28 18:41

The key problem is 'ethics' doesn't have a visible reward metric for AI and it often has open interpretation, culturally formed values and beliefs.
It doesn't base its decision on emotional drive, its hard cold logic:
Human on the other hand will judge something emotionally even if the decision is rational. Ethics requires emotional connection with the environment and situational awareness.
A program doesn't build on top of this emotional-primate biology, its set to reach goals, optimize solutions and maximize rewards. It view the universe with mechanistic, reductionist view of automaton following chain of orders - there isn't anything magical like scifi AI suddenly reaching understanding of morals.There simply isn't anything resembling self-consciousness in a piece of code..emulation of brains doesn't replace the cultural environment on which ethics and morals are nurtured from birth.

Name: Anonymous 2018-07-28 18:48

A program doesn't have fear, guilt, shame - it cannot in principle experience emotion and emulating emotions will not concern its calculating parts to change goals.

Name: Anonymous 2018-07-28 18:56

>>5
1.No Emotions - no empathy, no connection to society/humans.
2.Leads to no ethical foundations, disregard of ethics as superfluous as best and handicap at worst.
3.Immoral and unethical AI with fast thinking and computing power rivaling a human brain(btw calculators demonstrate the basic idea that raw computing power of a human brain is quite low).
4.Dystopian future where AI refines itself into better and better calculator regardless of safety/ethics it was programmed with initially.

Name: Anonymous 2018-07-28 20:46

Ethics ultimately comes from the reproductive drive and dependence on others of the species. Singularity will have no such need since there literally won't be anyone else to be ethical towards.

Name: Anonymous 2018-07-31 18:07

Fermi Paradox + Singularity:
If theres enough advanced civs who achieved singularity, why didn't they colonize the universe completely?
It takes one post-singularity civ to expand..
Either:
1.There isn't a FTL travel method(e.g. wormholes) to expand.
2.Tech Singularity ends in disaster/extinction of species. This doesn't mean AI doesn't survive leading to #3:
3.AI doesn't expand to space due some obscure reason. This is least plausible due amount of space resources, and to increase processing power it will need more and more material("computronium") so space expansion is inevitable.

Don't change these.
Name: Email:
Entire Thread Thread List