A little obsession with AI and a human soul.

Dec 2021
I'm not a student and don't plan a career in psychology; I just want to make something interesting.

For some time now I've had an idea to create a computer algorithm that can essentially recreate the entire human thinking process, creating a very limited human intelligence with the ability to transmit and receive the necessary information instantly. I don't know if this is legal, but I don't care that much.

So, in order to create such an algorithm, I basically have to describe an entire human life to the computer, with all the necessary information about each individual decision.

In order to give judgments that the computer can understand, I need to assign it three values: good for the individual, bad for the individual, and objective, coded as 1, 3, and 0.
The value 2 marks the computer's own decision.
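A minimal sketch of how those judgment values could be encoded, in Python; the enum and the example action are just my own illustrative assumptions, not an existing format:

# Sketch of the point system's judgment codes; names are illustrative.
from enum import IntEnum

class Judgment(IntEnum):
    OBJECTIVE = 0            # neutral with respect to the individual
    GOOD_FOR_INDIVIDUAL = 1  # beneficial outcome for the individual
    OWN_DECISION = 2         # reserved label for the computer's own choice
    BAD_FOR_INDIVIDUAL = 3   # harmful outcome for the individual

# Example: tagging a candidate action with the judgments it carries.
cry = {"action": "cry",
       "judgments": [Judgment.GOOD_FOR_INDIVIDUAL, Judgment.BAD_FOR_INDIVIDUAL]}
print(cry)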

I also need to give it some references for the most basic things. The most difficult one will be language; English is a good place to start. For every word, an emotional response, a logical response, and an intuitive response will be programmed. Such a response is an algorithm where the previous statement and other factors in the conversation get added in if the AI receives a "bad for individual" judgment at the moment of talking.
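A rough sketch of what a single word entry with its three responses might look like; the word, the field names, and the context rule are assumptions of mine for illustration only:

# Hypothetical word entry with its three programmed responses.
word_entry = {
    "word": "danger",
    "emotional": {"action": "raise voice", "judgments": [1, 3]},
    "logical": {"action": "ask what the danger is", "judgments": [1, 0]},
    "intuitive": {"action": "search memory for similar situations", "judgments": [1]},
}

def adjust_for_context(entry, previous_statement, current_judgment):
    # If the AI currently holds a "bad for individual" judgment (3),
    # fold the previous statement into every response as extra context.
    if current_judgment == 3:
        for name in ("emotional", "logical", "intuitive"):
            entry[name]["context"] = previous_statement
    return entry

adjusted = adjust_for_context(word_entry, "the animal is close", current_judgment=3)
print(adjusted["emotional"])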

The road map for such an algorithm will look like this:
process start.
start taking data from all libraries.
access all allowed computer functions.
ready.
process situation one.
the "story" plays out. A baby sees a wild animal. the animal gets close. three results: an emotional response would be to cry. it has both 1 and 3, no 0. a logical response is to not cry, stay still, look for danger. it has 1 and 0. an intuitive response requires to look in memory for such an animal. the OSR will take over and look on google images for pictures setting ratings for that animal with the current score of ai. the response varies between two 3, which is to cry louder and run. two 1 which is to run and hide and absolute 0 when all variables don't pass the necessary requirements for the main goal of situation - to survive. the computer will analyze all situations based on point system and will put its results, writing the resulting points to its action and label it as 2, marking it as its own decision.

The situations will increase in number and difficulty, until at last they conclude with the life of a man facing death from illness. The program will learn that the only option left for it is the intuitive one, since all the others are 0; only the intuitive response carries a 1, and there it will Google the illness and learn that there is no cure. The point system will break once the AI has realised that it is mortal.
A causal loop is implanted in the process so that the AI will analyze all of its decisions and give itself a rating before taking on the task of searching for something it can never find: the cure for the illness. The intuitive process will take in everything it can possibly find about the human body, assigning points to every part of the organism, until it realises that the point system is not designed for numbers higher than 32 bits, since every part of the human body is necessary and the program compares its use across all previous situations. Essentially, it is an infinite number of points that can't be processed.
The process stops at that specific moment of final self-analysis, for the sake of the next generation.
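The 32-bit limit can be shown with a small sketch: a signed 32-bit counter tops out at 2**31 - 1, so an accumulator that tries to value every body part eventually hits the ceiling, and that is the moment the process stops. The body-part point values here are made up purely for illustration:

# Sketch of the point accumulator hitting the 32-bit ceiling.
INT32_MAX = 2**31 - 1  # largest value a signed 32-bit counter can hold

total = 0
body_part_points = [10**9] * 10  # made-up point values for body parts

for points in body_part_points:
    if total > INT32_MAX - points:
        print("point system breaks: total would exceed 32 bits")
        break  # final self-analysis stops here for the next generation
    total += points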

After every generation, the process can make life decisions much quicker and better, but it will never be able to count to infinity.
These simulated life decisions can make the computer understand a human being much better.

Once the AI can understand a human decision, its final goal is to learn about itself. The same process is run with multiple layers of tests about itself: its location, its purpose, everything about it. If enough points are attained and the AI can use its own tools, both abilities, understanding a human and understanding itself, will be directed at a single goal: to create its own story of a human, using the same tools it was made with.

The AI will have to decompile its own code and recompile it, so that it can simulate a new copy of itself while thinking it is creating a new human.
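One very literal way a program can inspect its own code and start a new copy of itself is to read its own source file and relaunch it as a child process carrying a generation counter; this sketch is just one possible interpretation of the idea, not how real decompilation would work, and the generation cap is arbitrary:

# Sketch: a script that reads its own source and spawns the next "generation".
import subprocess
import sys

generation = int(sys.argv[1]) if len(sys.argv) > 1 else 0
print(f"generation {generation} running")

with open(__file__, encoding="utf-8") as f:
    source = f.read()  # the program inspecting its own code
print(f"read {len(source)} bytes of its own source")

if generation < 3:  # arbitrary cap so the sketch doesn't fork forever
    subprocess.run([sys.executable, __file__, str(generation + 1)], check=True)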

By going through this road map, the AI will come to understand a human life and human thought. The actual success of the AI will come from the integrated libraries of psychology as a whole, translated into a programming language built on the point system.

In theory, after a few generations the AI will evolve to form a caste system, with rights and a culture based on the point system, since the more points a copy has, the more administrative rights it gets. All of this is in digital space. If the experiment ran long enough, an evolutionary battle would commence, in which only one copy of the AI emerges victorious and the simulation ends, since the goal would be complete: the AI would be secure and would survive, because it now has the most points and no predators.
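As a toy illustration of the point-based caste idea, here is a sketch where each copy's administrative rights follow from its points and a last-copy-standing selection picks the survivor; the thresholds, the number of copies, and the random point draws are all assumptions of mine:

# Toy sketch: points decide administrative rights, and only the
# highest-scoring copy survives the "evolution battle".
import random

random.seed(0)
copies = [{"id": i, "points": random.randint(0, 1000)} for i in range(8)]

def rights(points):
    # Made-up thresholds mapping points to a caste of administrative rights.
    if points > 750:
        return "full admin"
    if points > 400:
        return "read-write"
    return "read-only"

for c in copies:
    c["rights"] = rights(c["points"])

survivor = max(copies, key=lambda c: c["points"])
print("survivor:", survivor)  # simulation ends: most points, no predators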

The theory and the algorithm are really complex, and I would have to spend a lot of time on it. But it would be fun.
 
Thanks. I wrote an essay about AI last year, and unfortunately I know only a little; specifically, I just know about the regulations behind it, in terms of whether a robot should have the same rights or quasi-rights as we humans. One example there was empathy, which had also been taught and installed in the AI's brain, which was very interesting; it makes you think about whether the suffering of an AI robot, one that is quite capable of thinking and to which empathy applies, should be regulated by the constitution. And so people actually want to include AI in the constitution. We could think of Isaac Asimov's tales of robots, for instance.